* [PATCH] TOMOYO: Add garbage collector support. (v3)
@ 2009-06-17 11:19 Tetsuo Handa
From: Tetsuo Handa @ 2009-06-17 11:19 UTC (permalink / raw)
To: linux-security-module, linux-kernel; +Cc: paulmck
Hello.
This patchset adds a garbage collector for TOMOYO.
This time, I'm using an RCU-like approach instead of the cookie-list
approach.
TOMOYO 1/3: Move sleeping operations to outside the semaphore.
6 files changed, 231 insertions(+), 345 deletions(-)
TOMOYO 2/3: Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name().
5 files changed, 70 insertions(+), 23 deletions(-)
TOMOYO 3/3: Add RCU-like garbage collector.
7 files changed, 733 insertions(+), 358 deletions(-)
Paul E. McKenney wrote ( http://lkml.org/lkml/2009/5/27/2 ) :
> I would also recommend the three-part LWN series as a starting point:
>
> # http://lwn.net/Articles/262464/ (What is RCU, Fundamentally?)
> # http://lwn.net/Articles/263130/ (What is RCU's Usage?)
> # http://lwn.net/Articles/264090/ (What is RCU's API?)
I've read these articles; they are very good.
They gave me an idea: we may be able to implement the GC so that readers are
permitted to sleep while no read locks are required.
The idea is to have two counters which hold the number of readers currently
reading the list: one is active and the other is inactive. A reader increments
the currently active counter before it starts reading and decrements that same
counter after it finishes reading. The GC swaps the active and inactive
counters and waits for the previously active counter's count to become 0
before releasing elements removed from the list.
Code is shown below.
atomic_t users_counter[2];
atomic_t users_counter_idx;
DEFINE_MUTEX(updater_mutex);
DEFINE_MUTEX(gc_mutex);
--- reader ---
{
	/* Get counter index. */
	int idx = atomic_read(&users_counter_idx);
	/* Lock counter. */
	atomic_inc(&users_counter[idx]);
	list_for_each_entry_rcu() {
		... /* Allowed to sleep. */
	}
	/* Unlock counter. */
	atomic_dec(&users_counter[idx]);
}
--- writer ---
{
	bool found = false;
	/* Get lock for writing. */
	mutex_lock(&updater_mutex);
	list_for_each_entry_rcu() {
		if (...)
			continue;
		found = true;
		break;
	}
	if (!found)
		list_add_rcu(element);
	/* Release lock for writing. */
	mutex_unlock(&updater_mutex);
}
--- garbage collector ---
{
	bool element_deleted = false;
	/* Protect the counters from concurrent GC threads. */
	mutex_lock(&gc_mutex);
	/* Get lock for writing. */
	mutex_lock(&updater_mutex);
	list_for_each_entry_rcu() {
		if (...)
			continue;
		list_del_rcu(element);
		element_deleted = true;
		break;
	}
	/* Release lock for writing. */
	mutex_unlock(&updater_mutex);
	if (element_deleted) {
		/* Swap active counter. */
		const int idx = atomic_read(&users_counter_idx);
		atomic_set(&users_counter_idx, idx ^ 1);
		/*
		 * Wait for readers who are using the previously active
		 * counter. This is similar to synchronize_rcu() except that
		 * this code allows readers to do operations which may sleep.
		 */
		while (atomic_read(&users_counter[idx]))
			msleep(1000);
		/*
		 * Nobody is using the previously active counter.
		 * Ready to release memory of elements removed before the
		 * previously active counter became inactive.
		 */
		kfree(element);
	}
	mutex_unlock(&gc_mutex);
}
In this scheme, the GC's kfree() call may be deferred for an unknown duration,
but the deferral will not matter if we use a dedicated kernel thread for GC.
I noticed that there is QRCU in the "RCU has a Family of Wait-to-Finish APIs"
section. My idea seems to resemble QRCU except for the grace-period handling.
But the "Availability" field is empty. Oh, what happened to QRCU?
Regards.
^ permalink raw reply [flat|nested] 16+ messages in thread* [PATCH 1/3] TOMOYO: Move sleeping operations to outside the semaphore. 2009-06-17 11:19 [PATCH] TOMOYO: Add garbage collector support. (v3) Tetsuo Handa @ 2009-06-17 11:21 ` Tetsuo Handa 2009-06-17 11:22 ` [PATCH 2/3] TOMOYO: Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name() Tetsuo Handa ` (3 subsequent siblings) 4 siblings, 0 replies; 16+ messages in thread From: Tetsuo Handa @ 2009-06-17 11:21 UTC (permalink / raw) To: linux-security-module, linux-kernel TOMOYO is using rw_semaphore for protecting list elements. But TOMOYO is doing operations which might sleep inside down_write(). This patch makes TOMOYO's sleeping operations go outside down_write(). Signed-off-by: Kentaro Takeda <takedakn@nttdata.co.jp> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> --- security/tomoyo/common.c | 96 ++++++++---------------- security/tomoyo/common.h | 26 ++---- security/tomoyo/domain.c | 135 ++++++++++++++-------------------- security/tomoyo/file.c | 135 +++++++++++++++++----------------- security/tomoyo/realpath.c | 177 +++++++++++++++------------------------------ security/tomoyo/realpath.h | 7 - 6 files changed, 231 insertions(+), 345 deletions(-) --- security-testing-2.6.git.orig/security/tomoyo/common.c +++ security-testing-2.6.git/security/tomoyo/common.c @@ -56,6 +56,7 @@ static struct tomoyo_profile { unsigned int value[TOMOYO_MAX_CONTROL_INDEX]; const struct tomoyo_path_info *comment; } *tomoyo_profile_ptr[TOMOYO_MAX_PROFILES]; +static DEFINE_SPINLOCK(tomoyo_profile_ptr_lock); /* Permit policy management by non-root user? 
*/ static bool tomoyo_manage_by_non_root; @@ -871,25 +872,29 @@ bool tomoyo_domain_quota_is_ok(struct to static struct tomoyo_profile *tomoyo_find_or_assign_new_profile(const unsigned int profile) { - static DEFINE_MUTEX(lock); - struct tomoyo_profile *ptr = NULL; - int i; + struct tomoyo_profile *new_ptr = NULL; + struct tomoyo_profile *ptr; if (profile >= TOMOYO_MAX_PROFILES) return NULL; - mutex_lock(&lock); + spin_lock(&tomoyo_profile_ptr_lock); ptr = tomoyo_profile_ptr[profile]; + spin_unlock(&tomoyo_profile_ptr_lock); if (ptr) - goto ok; - ptr = tomoyo_alloc_element(sizeof(*ptr)); - if (!ptr) - goto ok; - for (i = 0; i < TOMOYO_MAX_CONTROL_INDEX; i++) - ptr->value[i] = tomoyo_control_array[i].current_value; - mb(); /* Avoid out-of-order execution. */ - tomoyo_profile_ptr[profile] = ptr; - ok: - mutex_unlock(&lock); + return ptr; + new_ptr = kmalloc(sizeof(*new_ptr), GFP_KERNEL); + spin_lock(&tomoyo_profile_ptr_lock); + if (tomoyo_memory_ok(new_ptr)) { + int i; + ptr = new_ptr; + new_ptr = NULL; + for (i = 0; i < TOMOYO_MAX_CONTROL_INDEX; i++) + ptr->value[i] = tomoyo_control_array[i].current_value; + mb(); /* Avoid out-of-order execution. */ + tomoyo_profile_ptr[profile] = ptr; + } + spin_unlock(&tomoyo_profile_ptr_lock); + kfree(new_ptr); return ptr; } @@ -1083,10 +1088,10 @@ static DECLARE_RWSEM(tomoyo_policy_manag static int tomoyo_update_manager_entry(const char *manager, const bool is_delete) { - struct tomoyo_policy_manager_entry *new_entry; + struct tomoyo_policy_manager_entry *new_entry = NULL; struct tomoyo_policy_manager_entry *ptr; const struct tomoyo_path_info *saved_manager; - int error = -ENOMEM; + int error = is_delete ? 
-ENOENT : -ENOMEM; bool is_domain = false; if (tomoyo_is_domain_def(manager)) { @@ -1100,27 +1105,25 @@ static int tomoyo_update_manager_entry(c saved_manager = tomoyo_save_name(manager); if (!saved_manager) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_policy_manager_list_lock); list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { if (ptr->manager != saved_manager) continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->manager = saved_manager; + new_entry->is_domain = is_domain; + list_add_tail(&new_entry->list, &tomoyo_policy_manager_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->manager = saved_manager; - new_entry->is_domain = is_domain; - list_add_tail(&new_entry->list, &tomoyo_policy_manager_list); - error = 0; - out: up_write(&tomoyo_policy_manager_list_lock); + kfree(new_entry); return error; } @@ -1361,8 +1364,7 @@ static int tomoyo_write_domain_policy(st return 0; } if (!strcmp(data, TOMOYO_KEYWORD_IGNORE_GLOBAL_ALLOW_READ)) { - tomoyo_set_domain_flag(domain, is_delete, - TOMOYO_DOMAIN_FLAGS_IGNORE_GLOBAL_ALLOW_READ); + domain->ignore_global_allow_read = !is_delete; return 0; } return tomoyo_write_file_policy(data, domain, is_delete); @@ -1516,10 +1518,9 @@ static int tomoyo_read_domain_policy(str /* Print domainname and flags. 
*/ if (domain->quota_warned) quota_exceeded = "quota_exceeded\n"; - if (domain->flags & TOMOYO_DOMAIN_FLAGS_TRANSITION_FAILED) + if (domain->domain_transition_failed) transition_failed = "transition_failed\n"; - if (domain->flags & - TOMOYO_DOMAIN_FLAGS_IGNORE_GLOBAL_ALLOW_READ) + if (domain->ignore_global_allow_read) ignore_global_allow_read = TOMOYO_KEYWORD_IGNORE_GLOBAL_ALLOW_READ "\n"; done = tomoyo_io_printf(head, "%s\n" TOMOYO_KEYWORD_USE_PROFILE @@ -2119,35 +2120,6 @@ static int tomoyo_close_control(struct f } /** - * tomoyo_alloc_acl_element - Allocate permanent memory for ACL entry. - * - * @acl_type: Type of ACL entry. - * - * Returns pointer to the ACL entry on success, NULL otherwise. - */ -void *tomoyo_alloc_acl_element(const u8 acl_type) -{ - int len; - struct tomoyo_acl_info *ptr; - - switch (acl_type) { - case TOMOYO_TYPE_SINGLE_PATH_ACL: - len = sizeof(struct tomoyo_single_path_acl_record); - break; - case TOMOYO_TYPE_DOUBLE_PATH_ACL: - len = sizeof(struct tomoyo_double_path_acl_record); - break; - default: - return NULL; - } - ptr = tomoyo_alloc_element(len); - if (!ptr) - return NULL; - ptr->type = acl_type; - return ptr; -} - -/** * tomoyo_open - open() for /sys/kernel/security/tomoyo/ interface. * * @inode: Pointer to "struct inode". --- security-testing-2.6.git.orig/security/tomoyo/common.h +++ security-testing-2.6.git/security/tomoyo/common.h @@ -159,23 +159,20 @@ struct tomoyo_domain_info { u8 profile; /* Profile number to use. */ bool is_deleted; /* Delete flag. */ bool quota_warned; /* Quota warnning flag. */ - /* DOMAIN_FLAGS_*. Use tomoyo_set_domain_flag() to modify. */ - u8 flags; + /* Ignore "allow_read" directive in exception policy. */ + bool ignore_global_allow_read; + /* + * This domain was unable to create a new domain at + * tomoyo_find_next_domain() because the name of the domain to be + * created was too long or it could not allocate memory. + * More than one process continued execve() without domain transition. 
+ */ + bool domain_transition_failed; }; /* Profile number is an integer between 0 and 255. */ #define TOMOYO_MAX_PROFILES 256 -/* Ignore "allow_read" directive in exception policy. */ -#define TOMOYO_DOMAIN_FLAGS_IGNORE_GLOBAL_ALLOW_READ 1 -/* - * This domain was unable to create a new domain at tomoyo_find_next_domain() - * because the name of the domain to be created was too long or - * it could not allocate memory. - * More than one process continued execve() without domain transition. - */ -#define TOMOYO_DOMAIN_FLAGS_TRANSITION_FAILED 2 - /* * tomoyo_single_path_acl_record is a structure which is used for holding an * entry with one pathname operation (e.g. open(), mkdir()). @@ -374,15 +371,10 @@ struct tomoyo_domain_info *tomoyo_find_o /* Check mode for specified functionality. */ unsigned int tomoyo_check_flags(const struct tomoyo_domain_info *domain, const u8 index); -/* Allocate memory for structures. */ -void *tomoyo_alloc_acl_element(const u8 acl_type); /* Fill in "struct tomoyo_path_info" members. */ void tomoyo_fill_path_info(struct tomoyo_path_info *ptr); /* Run policy loader when /sbin/init starts. */ void tomoyo_load_policy(const char *filename); -/* Change "struct tomoyo_domain_info"->flags. */ -void tomoyo_set_domain_flag(struct tomoyo_domain_info *domain, - const bool is_delete, const u8 flags); /* strcmp() for "struct tomoyo_path_info" structure. */ static inline bool tomoyo_pathcmp(const struct tomoyo_path_info *a, --- security-testing-2.6.git.orig/security/tomoyo/domain.c +++ security-testing-2.6.git/security/tomoyo/domain.c @@ -131,28 +131,6 @@ struct tomoyo_alias_entry { }; /** - * tomoyo_set_domain_flag - Set or clear domain's attribute flags. - * - * @domain: Pointer to "struct tomoyo_domain_info". - * @is_delete: True if it is a delete request. - * @flags: Flags to set or clear. - * - * Returns nothing. 
- */ -void tomoyo_set_domain_flag(struct tomoyo_domain_info *domain, - const bool is_delete, const u8 flags) -{ - /* We need to serialize because this is bitfield operation. */ - static DEFINE_SPINLOCK(lock); - spin_lock(&lock); - if (!is_delete) - domain->flags |= flags; - else - domain->flags &= ~flags; - spin_unlock(&lock); -} - -/** * tomoyo_get_last_name - Get last component of a domainname. * * @domain: Pointer to "struct tomoyo_domain_info". @@ -223,11 +201,11 @@ static int tomoyo_update_domain_initiali const bool is_not, const bool is_delete) { - struct tomoyo_domain_initializer_entry *new_entry; + struct tomoyo_domain_initializer_entry *new_entry = NULL; struct tomoyo_domain_initializer_entry *ptr; const struct tomoyo_path_info *saved_program; const struct tomoyo_path_info *saved_domainname = NULL; - int error = -ENOMEM; + int error = is_delete ? -ENOENT : -ENOMEM; bool is_last_name = false; if (!tomoyo_is_correct_path(program, 1, -1, -1, __func__)) @@ -245,6 +223,8 @@ static int tomoyo_update_domain_initiali saved_program = tomoyo_save_name(program); if (!saved_program) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_initializer_list_lock); list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { if (ptr->is_not != is_not || @@ -253,23 +233,20 @@ static int tomoyo_update_domain_initiali continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->domainname = saved_domainname; + new_entry->program = saved_program; + new_entry->is_not = is_not; + new_entry->is_last_name = is_last_name; + list_add_tail(&new_entry->list, + &tomoyo_domain_initializer_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->domainname = saved_domainname; - new_entry->program = 
saved_program; - new_entry->is_not = is_not; - new_entry->is_last_name = is_last_name; - list_add_tail(&new_entry->list, &tomoyo_domain_initializer_list); - error = 0; - out: up_write(&tomoyo_domain_initializer_list_lock); + kfree(new_entry); return error; } @@ -436,11 +413,11 @@ static int tomoyo_update_domain_keeper_e const bool is_not, const bool is_delete) { - struct tomoyo_domain_keeper_entry *new_entry; + struct tomoyo_domain_keeper_entry *new_entry = NULL; struct tomoyo_domain_keeper_entry *ptr; const struct tomoyo_path_info *saved_domainname; const struct tomoyo_path_info *saved_program = NULL; - int error = -ENOMEM; + int error = is_delete ? -ENOENT : -ENOMEM; bool is_last_name = false; if (!tomoyo_is_domain_def(domainname) && @@ -458,6 +435,8 @@ static int tomoyo_update_domain_keeper_e saved_domainname = tomoyo_save_name(domainname); if (!saved_domainname) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_keeper_list_lock); list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { if (ptr->is_not != is_not || @@ -466,23 +445,19 @@ static int tomoyo_update_domain_keeper_e continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->domainname = saved_domainname; + new_entry->program = saved_program; + new_entry->is_not = is_not; + new_entry->is_last_name = is_last_name; + list_add_tail(&new_entry->list, &tomoyo_domain_keeper_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->domainname = saved_domainname; - new_entry->program = saved_program; - new_entry->is_not = is_not; - new_entry->is_last_name = is_last_name; - list_add_tail(&new_entry->list, &tomoyo_domain_keeper_list); - error = 0; - out: up_write(&tomoyo_domain_keeper_list_lock); + kfree(new_entry); return 
error; } @@ -632,11 +607,11 @@ static int tomoyo_update_alias_entry(con const char *aliased_name, const bool is_delete) { - struct tomoyo_alias_entry *new_entry; + struct tomoyo_alias_entry *new_entry = NULL; struct tomoyo_alias_entry *ptr; const struct tomoyo_path_info *saved_original_name; const struct tomoyo_path_info *saved_aliased_name; - int error = -ENOMEM; + int error = is_delete ? -ENOENT : -ENOMEM; if (!tomoyo_is_correct_path(original_name, 1, -1, -1, __func__) || !tomoyo_is_correct_path(aliased_name, 1, -1, -1, __func__)) @@ -645,6 +620,8 @@ static int tomoyo_update_alias_entry(con saved_aliased_name = tomoyo_save_name(aliased_name); if (!saved_original_name || !saved_aliased_name) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_alias_list_lock); list_for_each_entry(ptr, &tomoyo_alias_list, list) { if (ptr->original_name != saved_original_name || @@ -652,21 +629,17 @@ static int tomoyo_update_alias_entry(con continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->original_name = saved_original_name; + new_entry->aliased_name = saved_aliased_name; + list_add_tail(&new_entry->list, &tomoyo_alias_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->original_name = saved_original_name; - new_entry->aliased_name = saved_aliased_name; - list_add_tail(&new_entry->list, &tomoyo_alias_list); - error = 0; - out: up_write(&tomoyo_alias_list_lock); + kfree(new_entry); return error; } @@ -729,17 +702,19 @@ struct tomoyo_domain_info *tomoyo_find_o domainname, const u8 profile) { - struct tomoyo_domain_info *domain = NULL; + struct tomoyo_domain_info *new_domain = NULL; + struct tomoyo_domain_info *domain; const struct tomoyo_path_info *saved_domainname; - 
down_write(&tomoyo_domain_list_lock); - domain = tomoyo_find_domain(domainname); - if (domain) - goto out; if (!tomoyo_is_correct_domain(domainname, __func__)) - goto out; + return NULL; saved_domainname = tomoyo_save_name(domainname); if (!saved_domainname) + return NULL; + new_domain = kmalloc(sizeof(*new_domain), GFP_KERNEL); + down_write(&tomoyo_domain_list_lock); + domain = tomoyo_find_domain(domainname); + if (domain) goto out; /* Can I reuse memory of deleted domain? */ list_for_each_entry(domain, &tomoyo_domain_list, list) { @@ -763,7 +738,8 @@ struct tomoyo_domain_info *tomoyo_find_o list_for_each_entry(ptr, &domain->acl_info_list, list) { ptr->type |= TOMOYO_ACL_DELETED; } - tomoyo_set_domain_flag(domain, true, domain->flags); + domain->ignore_global_allow_read = false; + domain->domain_transition_failed = false; domain->profile = profile; domain->quota_warned = false; mb(); /* Avoid out-of-order execution. */ @@ -771,8 +747,9 @@ struct tomoyo_domain_info *tomoyo_find_o goto out; } /* No memory reusable. Create using new memory. 
*/ - domain = tomoyo_alloc_element(sizeof(*domain)); - if (domain) { + if (tomoyo_memory_ok(new_domain)) { + domain = new_domain; + new_domain = NULL; INIT_LIST_HEAD(&domain->acl_info_list); domain->domainname = saved_domainname; domain->profile = profile; @@ -780,6 +757,7 @@ struct tomoyo_domain_info *tomoyo_find_o } out: up_write(&tomoyo_domain_list_lock); + kfree(new_domain); return domain; } @@ -909,8 +887,7 @@ int tomoyo_find_next_domain(struct linux if (is_enforce) retval = -EPERM; else - tomoyo_set_domain_flag(old_domain, false, - TOMOYO_DOMAIN_FLAGS_TRANSITION_FAILED); + old_domain->domain_transition_failed = true; out: if (!domain) domain = old_domain; --- security-testing-2.6.git.orig/security/tomoyo/file.c +++ security-testing-2.6.git/security/tomoyo/file.c @@ -209,36 +209,34 @@ static DECLARE_RWSEM(tomoyo_globally_rea static int tomoyo_update_globally_readable_entry(const char *filename, const bool is_delete) { - struct tomoyo_globally_readable_file_entry *new_entry; + struct tomoyo_globally_readable_file_entry *new_entry = NULL; struct tomoyo_globally_readable_file_entry *ptr; const struct tomoyo_path_info *saved_filename; - int error = -ENOMEM; + int error = is_delete ? 
-ENOENT : -ENOMEM; if (!tomoyo_is_correct_path(filename, 1, 0, -1, __func__)) return -EINVAL; saved_filename = tomoyo_save_name(filename); if (!saved_filename) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_globally_readable_list_lock); list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { if (ptr->filename != saved_filename) continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->filename = saved_filename; + list_add_tail(&new_entry->list, &tomoyo_globally_readable_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->filename = saved_filename; - list_add_tail(&new_entry->list, &tomoyo_globally_readable_list); - error = 0; - out: up_write(&tomoyo_globally_readable_list_lock); + kfree(new_entry); return error; } @@ -352,36 +350,34 @@ static DECLARE_RWSEM(tomoyo_pattern_list static int tomoyo_update_file_pattern_entry(const char *pattern, const bool is_delete) { - struct tomoyo_pattern_entry *new_entry; + struct tomoyo_pattern_entry *new_entry = NULL; struct tomoyo_pattern_entry *ptr; const struct tomoyo_path_info *saved_pattern; - int error = -ENOMEM; + int error = is_delete ? 
-ENOENT : -ENOMEM; if (!tomoyo_is_correct_path(pattern, 0, 1, 0, __func__)) return -EINVAL; saved_pattern = tomoyo_save_name(pattern); if (!saved_pattern) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_pattern_list_lock); list_for_each_entry(ptr, &tomoyo_pattern_list, list) { if (saved_pattern != ptr->pattern) continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->pattern = saved_pattern; + list_add_tail(&new_entry->list, &tomoyo_pattern_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->pattern = saved_pattern; - list_add_tail(&new_entry->list, &tomoyo_pattern_list); - error = 0; - out: up_write(&tomoyo_pattern_list_lock); + kfree(new_entry); return error; } @@ -501,34 +497,32 @@ static DECLARE_RWSEM(tomoyo_no_rewrite_l static int tomoyo_update_no_rewrite_entry(const char *pattern, const bool is_delete) { - struct tomoyo_no_rewrite_entry *new_entry, *ptr; + struct tomoyo_no_rewrite_entry *new_entry = NULL; + struct tomoyo_no_rewrite_entry *ptr; const struct tomoyo_path_info *saved_pattern; - int error = -ENOMEM; + int error = is_delete ? 
-ENOENT : -ENOMEM; if (!tomoyo_is_correct_path(pattern, 0, 0, 0, __func__)) return -EINVAL; saved_pattern = tomoyo_save_name(pattern); if (!saved_pattern) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_no_rewrite_list_lock); list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { if (ptr->pattern != saved_pattern) continue; ptr->is_deleted = is_delete; error = 0; - goto out; + break; } - if (is_delete) { - error = -ENOENT; - goto out; + if (!is_delete && error && tomoyo_memory_ok(new_entry)) { + new_entry->pattern = saved_pattern; + list_add_tail(&new_entry->list, &tomoyo_no_rewrite_list); + new_entry = NULL; + error = 0; } - new_entry = tomoyo_alloc_element(sizeof(*new_entry)); - if (!new_entry) - goto out; - new_entry->pattern = saved_pattern; - list_add_tail(&new_entry->list, &tomoyo_no_rewrite_list); - error = 0; - out: up_write(&tomoyo_no_rewrite_list_lock); return error; } @@ -738,8 +732,7 @@ static int tomoyo_check_file_perm2(struc if (!filename) return 0; error = tomoyo_check_file_acl(domain, filename, perm); - if (error && perm == 4 && - (domain->flags & TOMOYO_DOMAIN_FLAGS_IGNORE_GLOBAL_ALLOW_READ) == 0 + if (error && perm == 4 && !domain->ignore_global_allow_read && tomoyo_is_globally_readable_file(filename)) error = 0; if (perm == 6) @@ -834,8 +827,8 @@ static int tomoyo_update_single_path_acl (1 << TOMOYO_TYPE_READ_ACL) | (1 << TOMOYO_TYPE_WRITE_ACL); const struct tomoyo_path_info *saved_filename; struct tomoyo_acl_info *ptr; - struct tomoyo_single_path_acl_record *acl; - int error = -ENOMEM; + struct tomoyo_single_path_acl_record *new_entry = NULL; + int error = is_delete ? 
-ENOENT : -ENOMEM; const u16 perm = 1 << type; if (!domain) @@ -845,10 +838,13 @@ static int tomoyo_update_single_path_acl saved_filename = tomoyo_save_name(filename); if (!saved_filename) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_acl_info_list_lock); if (is_delete) goto delete; list_for_each_entry(ptr, &domain->acl_info_list, list) { + struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; acl = container_of(ptr, struct tomoyo_single_path_acl_record, @@ -865,22 +861,23 @@ static int tomoyo_update_single_path_acl acl->perm |= rw_mask; ptr->type &= ~TOMOYO_ACL_DELETED; error = 0; - goto out; + break; } /* Not found. Append it to the tail. */ - acl = tomoyo_alloc_acl_element(TOMOYO_TYPE_SINGLE_PATH_ACL); - if (!acl) - goto out; - acl->perm = perm; - if (perm == (1 << TOMOYO_TYPE_READ_WRITE_ACL)) - acl->perm |= rw_mask; - acl->filename = saved_filename; - list_add_tail(&acl->head.list, &domain->acl_info_list); - error = 0; + if (error && tomoyo_memory_ok(new_entry)) { + new_entry->head.type = TOMOYO_TYPE_SINGLE_PATH_ACL; + new_entry->perm = perm; + if (perm == (1 << TOMOYO_TYPE_READ_WRITE_ACL)) + new_entry->perm |= rw_mask; + new_entry->filename = saved_filename; + list_add_tail(&new_entry->head.list, &domain->acl_info_list); + new_entry = NULL; + error = 0; + } goto out; delete: - error = -ENOENT; list_for_each_entry(ptr, &domain->acl_info_list, list) { + struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; acl = container_of(ptr, struct tomoyo_single_path_acl_record, @@ -899,6 +896,7 @@ static int tomoyo_update_single_path_acl } out: up_write(&tomoyo_domain_acl_info_list_lock); + kfree(new_entry); return error; } @@ -921,8 +919,8 @@ static int tomoyo_update_double_path_acl const struct tomoyo_path_info *saved_filename1; const struct tomoyo_path_info *saved_filename2; struct 
tomoyo_acl_info *ptr; - struct tomoyo_double_path_acl_record *acl; - int error = -ENOMEM; + struct tomoyo_double_path_acl_record *new_entry = NULL; + int error = is_delete ? -ENOENT : -ENOMEM; const u8 perm = 1 << type; if (!domain) @@ -934,10 +932,13 @@ static int tomoyo_update_double_path_acl saved_filename2 = tomoyo_save_name(filename2); if (!saved_filename1 || !saved_filename2) return -ENOMEM; + if (!is_delete) + new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_acl_info_list_lock); if (is_delete) goto delete; list_for_each_entry(ptr, &domain->acl_info_list, list) { + struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; acl = container_of(ptr, struct tomoyo_double_path_acl_record, @@ -951,21 +952,22 @@ static int tomoyo_update_double_path_acl acl->perm |= perm; ptr->type &= ~TOMOYO_ACL_DELETED; error = 0; - goto out; + break; } /* Not found. Append it to the tail. */ - acl = tomoyo_alloc_acl_element(TOMOYO_TYPE_DOUBLE_PATH_ACL); - if (!acl) - goto out; - acl->perm = perm; - acl->filename1 = saved_filename1; - acl->filename2 = saved_filename2; - list_add_tail(&acl->head.list, &domain->acl_info_list); - error = 0; + if (error && tomoyo_memory_ok(new_entry)) { + new_entry->head.type = TOMOYO_TYPE_DOUBLE_PATH_ACL; + new_entry->perm = perm; + new_entry->filename1 = saved_filename1; + new_entry->filename2 = saved_filename2; + list_add_tail(&new_entry->head.list, &domain->acl_info_list); + new_entry = NULL; + error = 0; + } goto out; delete: - error = -ENOENT; list_for_each_entry(ptr, &domain->acl_info_list, list) { + struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; acl = container_of(ptr, struct tomoyo_double_path_acl_record, @@ -981,6 +983,7 @@ static int tomoyo_update_double_path_acl } out: up_write(&tomoyo_domain_acl_info_list_lock); + kfree(new_entry); return error; } --- 
security-testing-2.6.git.orig/security/tomoyo/realpath.c +++ security-testing-2.6.git/security/tomoyo/realpath.c @@ -195,66 +195,36 @@ char *tomoyo_realpath_nofollow(const cha } /* Memory allocated for non-string data. */ -static unsigned int tomoyo_allocated_memory_for_elements; +static atomic_t tomoyo_allocated_memory_for_elements; /* Quota for holding non-string data. */ static unsigned int tomoyo_quota_for_elements; /** - * tomoyo_alloc_element - Allocate permanent memory for structures. + * tomoyo_memory_ok - Check memory quota. * - * @size: Size in bytes. - * - * Returns pointer to allocated memory on success, NULL otherwise. + * @ptr: Pointer to allocated memory. * - * Memory has to be zeroed. - * The RAM is chunked, so NEVER try to kfree() the returned pointer. + * Returns true if @ptr is not NULL and quota not exceeded, false otehrwise. */ -void *tomoyo_alloc_element(const unsigned int size) +bool tomoyo_memory_ok(void *ptr) { - static char *buf; - static DEFINE_MUTEX(lock); - static unsigned int buf_used_len = PATH_MAX; - char *ptr = NULL; - /*Assumes sizeof(void *) >= sizeof(long) is true. 
*/ - const unsigned int word_aligned_size - = roundup(size, max(sizeof(void *), sizeof(long))); - if (word_aligned_size > PATH_MAX) - return NULL; - mutex_lock(&lock); - if (buf_used_len + word_aligned_size > PATH_MAX) { - if (!tomoyo_quota_for_elements || - tomoyo_allocated_memory_for_elements - + PATH_MAX <= tomoyo_quota_for_elements) - ptr = kzalloc(PATH_MAX, GFP_KERNEL); - if (!ptr) { - printk(KERN_WARNING "ERROR: Out of memory " - "for tomoyo_alloc_element().\n"); - if (!tomoyo_policy_loaded) - panic("MAC Initialization failed.\n"); - } else { - buf = ptr; - tomoyo_allocated_memory_for_elements += PATH_MAX; - buf_used_len = word_aligned_size; - ptr = buf; - } - } else if (word_aligned_size) { - int i; - ptr = buf + buf_used_len; - buf_used_len += word_aligned_size; - for (i = 0; i < word_aligned_size; i++) { - if (!ptr[i]) - continue; - printk(KERN_ERR "WARNING: Reserved memory was tainted! " - "The system might go wrong.\n"); - ptr[i] = '\0'; - } - } - mutex_unlock(&lock); - return ptr; + const int len = ptr ? ksize(ptr) : 0; + atomic_add(len, &tomoyo_allocated_memory_for_elements); + if (len && (!tomoyo_quota_for_elements || + atomic_read(&tomoyo_allocated_memory_for_elements) + <= tomoyo_quota_for_elements)) { + memset(ptr, 0, len); + return true; + } + atomic_sub(len, &tomoyo_allocated_memory_for_elements); + printk(KERN_WARNING "ERROR: Out of memory. (%s)\n", __func__); + if (!tomoyo_policy_loaded) + panic("MAC Initialization failed.\n"); + return false; } /* Memory allocated for string data in bytes. */ -static unsigned int tomoyo_allocated_memory_for_savename; +static atomic_t tomoyo_allocated_memory_for_savename; /* Quota for holding string data in bytes. */ static unsigned int tomoyo_quota_for_savename; @@ -280,13 +250,6 @@ struct tomoyo_name_entry { struct tomoyo_path_info entry; }; -/* Structure for available memory region. */ -struct tomoyo_free_memory_block_list { - struct list_head list; - char *ptr; /* Pointer to a free area. 
*/ - int len; /* Length of the area. */ -}; - /* * tomoyo_name_list is used for holding string data used by TOMOYO. * Since same string data is likely used for multiple times (e.g. @@ -294,6 +257,7 @@ struct tomoyo_free_memory_block_list { * "const struct tomoyo_path_info *". */ static struct list_head tomoyo_name_list[TOMOYO_MAX_HASH]; +static DEFINE_MUTEX(tomoyo_name_list_lock); /** * tomoyo_save_name - Allocate permanent memory for string data. @@ -306,73 +270,54 @@ static struct list_head tomoyo_name_list */ const struct tomoyo_path_info *tomoyo_save_name(const char *name) { - static LIST_HEAD(fmb_list); - static DEFINE_MUTEX(lock); + struct tomoyo_name_entry *entry; struct tomoyo_name_entry *ptr; unsigned int hash; - /* fmb contains available size in bytes. - fmb is removed from the fmb_list when fmb->len becomes 0. */ - struct tomoyo_free_memory_block_list *fmb; - int len; - char *cp; + const int len = name ? strlen(name) + 1 : 0; + int allocated_len; + int error = -ENOMEM; - if (!name) + if (!len) return NULL; - len = strlen(name) + 1; if (len > TOMOYO_MAX_PATHNAME_LEN) { - printk(KERN_WARNING "ERROR: Name too long " - "for tomoyo_save_name().\n"); + printk(KERN_WARNING "ERROR: Name too long. (%s)\n", __func__); return NULL; } hash = full_name_hash((const unsigned char *) name, len - 1); - mutex_lock(&lock); + entry = kmalloc(sizeof(*entry) + len, GFP_KERNEL); + allocated_len = entry ? 
ksize(entry) : 0; + mutex_lock(&tomoyo_name_list_lock); list_for_each_entry(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], - list) { - if (hash == ptr->entry.hash && !strcmp(name, ptr->entry.name)) - goto out; - } - list_for_each_entry(fmb, &fmb_list, list) { - if (len <= fmb->len) - goto ready; - } - if (!tomoyo_quota_for_savename || - tomoyo_allocated_memory_for_savename + PATH_MAX - <= tomoyo_quota_for_savename) - cp = kzalloc(PATH_MAX, GFP_KERNEL); - else - cp = NULL; - fmb = kzalloc(sizeof(*fmb), GFP_KERNEL); - if (!cp || !fmb) { - kfree(cp); - kfree(fmb); - printk(KERN_WARNING "ERROR: Out of memory " - "for tomoyo_save_name().\n"); - if (!tomoyo_policy_loaded) - panic("MAC Initialization failed.\n"); - ptr = NULL; - goto out; - } - tomoyo_allocated_memory_for_savename += PATH_MAX; - list_add(&fmb->list, &fmb_list); - fmb->ptr = cp; - fmb->len = PATH_MAX; - ready: - ptr = tomoyo_alloc_element(sizeof(*ptr)); - if (!ptr) - goto out; - ptr->entry.name = fmb->ptr; - memmove(fmb->ptr, name, len); - tomoyo_fill_path_info(&ptr->entry); - fmb->ptr += len; - fmb->len -= len; - list_add_tail(&ptr->list, &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); - if (fmb->len == 0) { - list_del(&fmb->list); - kfree(fmb); - } - out: - mutex_unlock(&lock); - return ptr ? 
&ptr->entry : NULL; + list) { + if (hash != ptr->entry.hash || strcmp(name, ptr->entry.name)) + continue; + error = 0; + break; + } + if (error && entry && + (!tomoyo_quota_for_savename || + atomic_read(&tomoyo_allocated_memory_for_savename) + allocated_len + <= tomoyo_quota_for_savename)) { + atomic_add(allocated_len, + &tomoyo_allocated_memory_for_savename); + ptr = entry; + memset(ptr, 0, sizeof(*ptr)); + ptr->entry.name = ((char *) ptr) + sizeof(*ptr); + memmove((char *) ptr->entry.name, name, len); + tomoyo_fill_path_info(&ptr->entry); + list_add_tail(&ptr->list, + &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); + entry = NULL; + error = 0; + } + mutex_unlock(&tomoyo_name_list_lock); + kfree(entry); + if (!error) + return &ptr->entry; + printk(KERN_WARNING "ERROR: Out of memory. (%s)\n", __func__); + if (!tomoyo_policy_loaded) + panic("MAC Initialization failed.\n"); + return NULL; } /** @@ -438,9 +383,9 @@ int tomoyo_read_memory_counter(struct to { if (!head->read_eof) { const unsigned int shared - = tomoyo_allocated_memory_for_savename; + = atomic_read(&tomoyo_allocated_memory_for_savename); const unsigned int private - = tomoyo_allocated_memory_for_elements; + = atomic_read(&tomoyo_allocated_memory_for_elements); const unsigned int dynamic = atomic_read(&tomoyo_dynamic_memory_size); char buffer[64]; --- security-testing-2.6.git.orig/security/tomoyo/realpath.h +++ security-testing-2.6.git/security/tomoyo/realpath.h @@ -36,11 +36,8 @@ char *tomoyo_realpath_nofollow(const cha /* Same with tomoyo_realpath() except that the pathname is already solved. */ char *tomoyo_realpath_from_path(struct path *path); -/* - * Allocate memory for ACL entry. - * The RAM is chunked, so NEVER try to kfree() the returned pointer. - */ -void *tomoyo_alloc_element(const unsigned int size); +/* Check memory quota. */ +bool tomoyo_memory_ok(void *ptr); /* * Keep the given name on the RAM. ^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 2/3] TOMOYO: Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name(). 2009-06-17 11:19 [PATCH] TOMOYO: Add garbage collector support. (v3) Tetsuo Handa 2009-06-17 11:21 ` [PATCH 1/3] TOMOYO: Move sleeping operations to outside the semaphore Tetsuo Handa @ 2009-06-17 11:22 ` Tetsuo Handa 2009-06-17 11:23 ` [PATCH 3/3] TOMOYO: Add RCU-like garbage collector Tetsuo Handa ` (2 subsequent siblings) 4 siblings, 0 replies; 16+ messages in thread From: Tetsuo Handa @ 2009-06-17 11:22 UTC (permalink / raw) To: linux-security-module, linux-kernel Replace tomoyo_save_name() with tomoyo_get_name() and tomoyo_put_name(). This is preparation for implementing GC support. Since refcounter is not added yet, tomoyo_put_name() is a no-op. Signed-off-by: Kentaro Takeda <takedakn@nttdata.co.jp> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> --- security/tomoyo/common.c | 11 +++++++++-- security/tomoyo/domain.c | 41 +++++++++++++++++++++++++++++++---------- security/tomoyo/file.c | 29 ++++++++++++++++++++++------- security/tomoyo/realpath.c | 6 +++--- security/tomoyo/realpath.h | 6 +++++- 5 files changed, 70 insertions(+), 23 deletions(-) --- security-testing-2.6.git.orig/security/tomoyo/common.c +++ security-testing-2.6.git/security/tomoyo/common.c @@ -929,7 +929,12 @@ static int tomoyo_write_profile(struct t return -EINVAL; *cp = '\0'; if (!strcmp(data, "COMMENT")) { - profile->comment = tomoyo_save_name(cp + 1); + const struct tomoyo_path_info *new_comment + = tomoyo_get_name(cp + 1); + const struct tomoyo_path_info *old_comment; + old_comment = profile->comment; + profile->comment = new_comment; + tomoyo_put_name(old_comment); return 0; } for (i = 0; i < TOMOYO_MAX_CONTROL_INDEX; i++) { @@ -1102,7 +1107,7 @@ static int tomoyo_update_manager_entry(c if (!tomoyo_is_correct_path(manager, 1, -1, -1, __func__)) return -EINVAL; } - saved_manager = tomoyo_save_name(manager); + saved_manager = tomoyo_get_name(manager); if (!saved_manager) return 
-ENOMEM; if (!is_delete) @@ -1117,12 +1122,14 @@ static int tomoyo_update_manager_entry(c } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->manager = saved_manager; + saved_manager = NULL; new_entry->is_domain = is_domain; list_add_tail(&new_entry->list, &tomoyo_policy_manager_list); new_entry = NULL; error = 0; } up_write(&tomoyo_policy_manager_list_lock); + tomoyo_put_name(saved_manager); kfree(new_entry); return error; } --- security-testing-2.6.git.orig/security/tomoyo/domain.c +++ security-testing-2.6.git/security/tomoyo/domain.c @@ -216,13 +216,15 @@ static int tomoyo_update_domain_initiali is_last_name = true; else if (!tomoyo_is_correct_domain(domainname, __func__)) return -EINVAL; - saved_domainname = tomoyo_save_name(domainname); + saved_domainname = tomoyo_get_name(domainname); if (!saved_domainname) return -ENOMEM; } - saved_program = tomoyo_save_name(program); - if (!saved_program) + saved_program = tomoyo_get_name(program); + if (!saved_program) { + tomoyo_put_name(saved_domainname); return -ENOMEM; + } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_initializer_list_lock); @@ -237,7 +239,9 @@ static int tomoyo_update_domain_initiali } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->domainname = saved_domainname; + saved_domainname = NULL; new_entry->program = saved_program; + saved_program = NULL; new_entry->is_not = is_not; new_entry->is_last_name = is_last_name; list_add_tail(&new_entry->list, @@ -246,6 +250,8 @@ static int tomoyo_update_domain_initiali error = 0; } up_write(&tomoyo_domain_initializer_list_lock); + tomoyo_put_name(saved_domainname); + tomoyo_put_name(saved_program); kfree(new_entry); return error; } @@ -428,13 +434,15 @@ static int tomoyo_update_domain_keeper_e if (program) { if (!tomoyo_is_correct_path(program, 1, -1, -1, __func__)) return -EINVAL; - saved_program = tomoyo_save_name(program); + saved_program = tomoyo_get_name(program); if 
(!saved_program) return -ENOMEM; } - saved_domainname = tomoyo_save_name(domainname); - if (!saved_domainname) + saved_domainname = tomoyo_get_name(domainname); + if (!saved_domainname) { + tomoyo_put_name(saved_program); return -ENOMEM; + } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_keeper_list_lock); @@ -449,7 +457,9 @@ static int tomoyo_update_domain_keeper_e } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->domainname = saved_domainname; + saved_domainname = NULL; new_entry->program = saved_program; + saved_program = NULL; new_entry->is_not = is_not; new_entry->is_last_name = is_last_name; list_add_tail(&new_entry->list, &tomoyo_domain_keeper_list); @@ -457,6 +467,8 @@ static int tomoyo_update_domain_keeper_e error = 0; } up_write(&tomoyo_domain_keeper_list_lock); + tomoyo_put_name(saved_domainname); + tomoyo_put_name(saved_program); kfree(new_entry); return error; } @@ -616,10 +628,13 @@ static int tomoyo_update_alias_entry(con if (!tomoyo_is_correct_path(original_name, 1, -1, -1, __func__) || !tomoyo_is_correct_path(aliased_name, 1, -1, -1, __func__)) return -EINVAL; /* No patterns allowed. 
*/ - saved_original_name = tomoyo_save_name(original_name); - saved_aliased_name = tomoyo_save_name(aliased_name); - if (!saved_original_name || !saved_aliased_name) + saved_original_name = tomoyo_get_name(original_name); + saved_aliased_name = tomoyo_get_name(aliased_name); + if (!saved_original_name || !saved_aliased_name) { + tomoyo_put_name(saved_original_name); + tomoyo_put_name(saved_aliased_name); return -ENOMEM; + } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_alias_list_lock); @@ -633,12 +648,16 @@ static int tomoyo_update_alias_entry(con } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->original_name = saved_original_name; + saved_original_name = NULL; new_entry->aliased_name = saved_aliased_name; + saved_aliased_name = NULL; list_add_tail(&new_entry->list, &tomoyo_alias_list); new_entry = NULL; error = 0; } up_write(&tomoyo_alias_list_lock); + tomoyo_put_name(saved_original_name); + tomoyo_put_name(saved_aliased_name); kfree(new_entry); return error; } @@ -708,7 +727,7 @@ struct tomoyo_domain_info *tomoyo_find_o if (!tomoyo_is_correct_domain(domainname, __func__)) return NULL; - saved_domainname = tomoyo_save_name(domainname); + saved_domainname = tomoyo_get_name(domainname); if (!saved_domainname) return NULL; new_domain = kmalloc(sizeof(*new_domain), GFP_KERNEL); @@ -752,11 +771,13 @@ struct tomoyo_domain_info *tomoyo_find_o new_domain = NULL; INIT_LIST_HEAD(&domain->acl_info_list); domain->domainname = saved_domainname; + saved_domainname = NULL; domain->profile = profile; list_add_tail(&domain->list, &tomoyo_domain_list); } out: up_write(&tomoyo_domain_list_lock); + tomoyo_put_name(saved_domainname); kfree(new_domain); return domain; } --- security-testing-2.6.git.orig/security/tomoyo/file.c +++ security-testing-2.6.git/security/tomoyo/file.c @@ -216,7 +216,7 @@ static int tomoyo_update_globally_readab if (!tomoyo_is_correct_path(filename, 1, 0, -1, __func__)) return -EINVAL; - 
saved_filename = tomoyo_save_name(filename); + saved_filename = tomoyo_get_name(filename); if (!saved_filename) return -ENOMEM; if (!is_delete) @@ -231,11 +231,13 @@ static int tomoyo_update_globally_readab } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->filename = saved_filename; + saved_filename = NULL; list_add_tail(&new_entry->list, &tomoyo_globally_readable_list); new_entry = NULL; error = 0; } up_write(&tomoyo_globally_readable_list_lock); + tomoyo_put_name(saved_filename); kfree(new_entry); return error; } @@ -357,7 +359,7 @@ static int tomoyo_update_file_pattern_en if (!tomoyo_is_correct_path(pattern, 0, 1, 0, __func__)) return -EINVAL; - saved_pattern = tomoyo_save_name(pattern); + saved_pattern = tomoyo_get_name(pattern); if (!saved_pattern) return -ENOMEM; if (!is_delete) @@ -372,11 +374,13 @@ static int tomoyo_update_file_pattern_en } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->pattern = saved_pattern; + saved_pattern = NULL; list_add_tail(&new_entry->list, &tomoyo_pattern_list); new_entry = NULL; error = 0; } up_write(&tomoyo_pattern_list_lock); + tomoyo_put_name(saved_pattern); kfree(new_entry); return error; } @@ -504,7 +508,7 @@ static int tomoyo_update_no_rewrite_entr if (!tomoyo_is_correct_path(pattern, 0, 0, 0, __func__)) return -EINVAL; - saved_pattern = tomoyo_save_name(pattern); + saved_pattern = tomoyo_get_name(pattern); if (!saved_pattern) return -ENOMEM; if (!is_delete) @@ -519,11 +523,13 @@ static int tomoyo_update_no_rewrite_entr } if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->pattern = saved_pattern; + saved_pattern = NULL; list_add_tail(&new_entry->list, &tomoyo_no_rewrite_list); new_entry = NULL; error = 0; } up_write(&tomoyo_no_rewrite_list_lock); + tomoyo_put_name(saved_pattern); return error; } @@ -835,7 +841,7 @@ static int tomoyo_update_single_path_acl return -EINVAL; if (!tomoyo_is_correct_path(filename, 0, 0, 0, __func__)) return -EINVAL; - saved_filename = 
tomoyo_save_name(filename); + saved_filename = tomoyo_get_name(filename); if (!saved_filename) return -ENOMEM; if (!is_delete) @@ -870,6 +876,7 @@ static int tomoyo_update_single_path_acl if (perm == (1 << TOMOYO_TYPE_READ_WRITE_ACL)) new_entry->perm |= rw_mask; new_entry->filename = saved_filename; + saved_filename = NULL; list_add_tail(&new_entry->head.list, &domain->acl_info_list); new_entry = NULL; error = 0; @@ -896,6 +903,7 @@ static int tomoyo_update_single_path_acl } out: up_write(&tomoyo_domain_acl_info_list_lock); + tomoyo_put_name(saved_filename); kfree(new_entry); return error; } @@ -928,10 +936,13 @@ static int tomoyo_update_double_path_acl if (!tomoyo_is_correct_path(filename1, 0, 0, 0, __func__) || !tomoyo_is_correct_path(filename2, 0, 0, 0, __func__)) return -EINVAL; - saved_filename1 = tomoyo_save_name(filename1); - saved_filename2 = tomoyo_save_name(filename2); - if (!saved_filename1 || !saved_filename2) + saved_filename1 = tomoyo_get_name(filename1); + saved_filename2 = tomoyo_get_name(filename2); + if (!saved_filename1 || !saved_filename2) { + tomoyo_put_name(saved_filename1); + tomoyo_put_name(saved_filename2); return -ENOMEM; + } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); down_write(&tomoyo_domain_acl_info_list_lock); @@ -959,7 +970,9 @@ static int tomoyo_update_double_path_acl new_entry->head.type = TOMOYO_TYPE_DOUBLE_PATH_ACL; new_entry->perm = perm; new_entry->filename1 = saved_filename1; + saved_filename1 = NULL; new_entry->filename2 = saved_filename2; + saved_filename2 = NULL; list_add_tail(&new_entry->head.list, &domain->acl_info_list); new_entry = NULL; error = 0; @@ -983,6 +996,8 @@ static int tomoyo_update_double_path_acl } out: up_write(&tomoyo_domain_acl_info_list_lock); + tomoyo_put_name(saved_filename1); + tomoyo_put_name(saved_filename2); kfree(new_entry); return error; } --- security-testing-2.6.git.orig/security/tomoyo/realpath.c +++ security-testing-2.6.git/security/tomoyo/realpath.c @@ -260,7 +260,7 
@@ static struct list_head tomoyo_name_list static DEFINE_MUTEX(tomoyo_name_list_lock); /** - * tomoyo_save_name - Allocate permanent memory for string data. + * tomoyo_get_name - Allocate permanent memory for string data. * * @name: The string to store into the permernent memory. * @@ -268,7 +268,7 @@ static DEFINE_MUTEX(tomoyo_name_list_loc * * The RAM is shared, so NEVER try to modify or kfree() the returned name. */ -const struct tomoyo_path_info *tomoyo_save_name(const char *name) +const struct tomoyo_path_info *tomoyo_get_name(const char *name) { struct tomoyo_name_entry *entry; struct tomoyo_name_entry *ptr; @@ -331,7 +331,7 @@ void __init tomoyo_realpath_init(void) for (i = 0; i < TOMOYO_MAX_HASH; i++) INIT_LIST_HEAD(&tomoyo_name_list[i]); INIT_LIST_HEAD(&tomoyo_kernel_domain.acl_info_list); - tomoyo_kernel_domain.domainname = tomoyo_save_name(TOMOYO_ROOT_NAME); + tomoyo_kernel_domain.domainname = tomoyo_get_name(TOMOYO_ROOT_NAME); list_add_tail(&tomoyo_kernel_domain.list, &tomoyo_domain_list); down_read(&tomoyo_domain_list_lock); if (tomoyo_find_domain(TOMOYO_ROOT_NAME) != &tomoyo_kernel_domain) --- security-testing-2.6.git.orig/security/tomoyo/realpath.h +++ security-testing-2.6.git/security/tomoyo/realpath.h @@ -43,7 +43,11 @@ bool tomoyo_memory_ok(void *ptr); * Keep the given name on the RAM. * The RAM is shared, so NEVER try to modify or kfree() the returned name. */ -const struct tomoyo_path_info *tomoyo_save_name(const char *name); +const struct tomoyo_path_info *tomoyo_get_name(const char *name); +static inline void tomoyo_put_name(const struct tomoyo_path_info *name) +{ + /* It's a dummy so far. */ +} /* Allocate memory for temporary use (e.g. permission checks). */ void *tomoyo_alloc(const size_t size); ^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 3/3] TOMOYO: Add RCU-like garbage collector. 2009-06-17 11:19 [PATCH] TOMOYO: Add garbage collector support. (v3) Tetsuo Handa 2009-06-17 11:21 ` [PATCH 1/3] TOMOYO: Move sleeping operations to outside the semaphore Tetsuo Handa 2009-06-17 11:22 ` [PATCH 2/3] TOMOYO: Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name() Tetsuo Handa @ 2009-06-17 11:23 ` Tetsuo Handa 2009-06-17 12:28 ` [PATCH] TOMOYO: Add garbage collector support. (v3) Peter Zijlstra 2009-06-17 16:31 ` Paul E. McKenney 4 siblings, 0 replies; 16+ messages in thread From: Tetsuo Handa @ 2009-06-17 11:23 UTC (permalink / raw) To: linux-security-module, linux-kernel; +Cc: paulmck As of now, TOMOYO cannot release memory used by marked-as-deleted list elements because TOMOYO does not know how many readers there are. This patch adds "atomic_t users" to the "struct tomoyo_domain_info" and "struct tomoyo_name_entry" structures and adds the global counter "atomic_t tomoyo_users_counter[2]" and the active counter indicator "tomoyo_users_counter_idx". Reader threads do "idx = atomic_read(&tomoyo_users_counter_idx);" and "atomic_inc(&tomoyo_users_counter[idx]);" before they start reading, and do "atomic_dec(&tomoyo_users_counter[idx]);" after they finish reading. The garbage collector thread removes marked-as-deleted elements using list_del_rcu(). Then, GC updates "tomoyo_users_counter_idx" so that subsequent readers will use the other global counter. Next, GC waits for the previously used global counter to become 0, which indicates that RCU's grace period has expired. To be able to release all marked-as-deleted elements within a single RCU grace period, GC temporarily stores marked-as-deleted elements in a private list. 
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> --- security/tomoyo/common.c | 127 ++++++-------- security/tomoyo/common.h | 192 +++++++++++++++++++++- security/tomoyo/domain.c | 191 ++++------------------ security/tomoyo/file.c | 174 +++++++------------- security/tomoyo/realpath.c | 384 +++++++++++++++++++++++++++++++++++++++++++-- security/tomoyo/realpath.h | 5 security/tomoyo/tomoyo.c | 18 +- 7 files changed, 733 insertions(+), 358 deletions(-) --- security-testing-2.6.git.orig/security/tomoyo/common.c +++ security-testing-2.6.git/security/tomoyo/common.c @@ -12,10 +12,14 @@ #include <linux/uaccess.h> #include <linux/security.h> #include <linux/hardirq.h> +#include <linux/kthread.h> #include "realpath.h" #include "common.h" #include "tomoyo.h" +atomic_t tomoyo_users_counter[2]; +atomic_t tomoyo_users_counter_idx; + /* Has loading policy done? */ bool tomoyo_policy_loaded; @@ -340,10 +344,9 @@ bool tomoyo_is_domain_def(const unsigned * * @domainname: The domainname to find. * - * Caller must call down_read(&tomoyo_domain_list_lock); or - * down_write(&tomoyo_domain_list_lock); . - * * Returns pointer to "struct tomoyo_domain_info" if found, NULL otherwise. + * + * Caller holds tomoyo_lock(). */ struct tomoyo_domain_info *tomoyo_find_domain(const char *domainname) { @@ -352,7 +355,7 @@ struct tomoyo_domain_info *tomoyo_find_d name.name = domainname; tomoyo_fill_path_info(&name); - list_for_each_entry(domain, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { if (!domain->is_deleted && !tomoyo_pathcmp(&name, domain->domainname)) return domain; @@ -788,6 +791,8 @@ bool tomoyo_verbose_mode(const struct to * @domain: Pointer to "struct tomoyo_domain_info". * * Returns true if the domain is not exceeded quota, false otherwise. + * + * Caller holds tomoyo_lock(). 
*/ bool tomoyo_domain_quota_is_ok(struct tomoyo_domain_info * const domain) { @@ -796,8 +801,7 @@ bool tomoyo_domain_quota_is_ok(struct to if (!domain) return true; - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { if (ptr->type & TOMOYO_ACL_DELETED) continue; switch (tomoyo_acl_type2(ptr)) { @@ -850,7 +854,6 @@ bool tomoyo_domain_quota_is_ok(struct to break; } } - up_read(&tomoyo_domain_acl_info_list_lock); if (count < tomoyo_check_flags(domain, TOMOYO_MAX_ACCEPT_ENTRY)) return true; if (!domain->quota_warned) { @@ -1029,27 +1032,6 @@ static int tomoyo_read_profile(struct to } /* - * tomoyo_policy_manager_entry is a structure which is used for holding list of - * domainnames or programs which are permitted to modify configuration via - * /sys/kernel/security/tomoyo/ interface. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_policy_manager_list . - * (2) "manager" is a domainname or a program's pathname. - * (3) "is_domain" is a bool which is true if "manager" is a domainname, false - * otherwise. - * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_policy_manager_entry { - struct list_head list; - /* A path to program or a domainname. */ - const struct tomoyo_path_info *manager; - bool is_domain; /* True if manager is a domainname. */ - bool is_deleted; /* True if this entry is deleted. */ -}; - -/* * tomoyo_policy_manager_list is used for holding list of domainnames or * programs which are permitted to modify configuration via * /sys/kernel/security/tomoyo/ interface. 
@@ -1079,8 +1061,7 @@ struct tomoyo_policy_manager_entry { * * # cat /sys/kernel/security/tomoyo/manager */ -static LIST_HEAD(tomoyo_policy_manager_list); -static DECLARE_RWSEM(tomoyo_policy_manager_list_lock); +LIST_HEAD(tomoyo_policy_manager_list); /** * tomoyo_update_manager_entry - Add a manager entry. @@ -1112,8 +1093,8 @@ static int tomoyo_update_manager_entry(c return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_policy_manager_list_lock); - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { if (ptr->manager != saved_manager) continue; ptr->is_deleted = is_delete; @@ -1124,11 +1105,12 @@ static int tomoyo_update_manager_entry(c new_entry->manager = saved_manager; saved_manager = NULL; new_entry->is_domain = is_domain; - list_add_tail(&new_entry->list, &tomoyo_policy_manager_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_policy_manager_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_policy_manager_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_manager); kfree(new_entry); return error; @@ -1167,9 +1149,8 @@ static int tomoyo_read_manager_policy(st if (head->read_eof) return 0; - down_read(&tomoyo_policy_manager_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_policy_manager_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_policy_manager_list) { struct tomoyo_policy_manager_entry *ptr; ptr = list_entry(pos, struct tomoyo_policy_manager_entry, list); @@ -1179,7 +1160,6 @@ static int tomoyo_read_manager_policy(st if (!done) break; } - up_read(&tomoyo_policy_manager_list_lock); head->read_eof = done; return 0; } @@ -1189,6 +1169,8 @@ static int tomoyo_read_manager_policy(st * * Returns true if the current process is permitted to modify policy * via /sys/kernel/security/tomoyo/ interface. + * + * Caller holds tomoyo_lock(). 
*/ static bool tomoyo_is_policy_manager(void) { @@ -1202,29 +1184,25 @@ static bool tomoyo_is_policy_manager(voi return true; if (!tomoyo_manage_by_non_root && (task->cred->uid || task->cred->euid)) return false; - down_read(&tomoyo_policy_manager_list_lock); - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { if (!ptr->is_deleted && ptr->is_domain && !tomoyo_pathcmp(domainname, ptr->manager)) { found = true; break; } } - up_read(&tomoyo_policy_manager_list_lock); if (found) return true; exe = tomoyo_get_exe(); if (!exe) return false; - down_read(&tomoyo_policy_manager_list_lock); - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { if (!ptr->is_deleted && !ptr->is_domain && !strcmp(exe, ptr->manager->name)) { found = true; break; } } - up_read(&tomoyo_policy_manager_list_lock); if (!found) { /* Reduce error messages. */ static pid_t last_pid; const pid_t pid = current->pid; @@ -1245,6 +1223,8 @@ static bool tomoyo_is_policy_manager(voi * @data: String to parse. * * Returns true on success, false otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static bool tomoyo_is_select_one(struct tomoyo_io_buffer *head, const char *data) @@ -1260,11 +1240,8 @@ static bool tomoyo_is_select_one(struct domain = tomoyo_real_domain(p); read_unlock(&tasklist_lock); } else if (!strncmp(data, "domain=", 7)) { - if (tomoyo_is_domain_def(data + 7)) { - down_read(&tomoyo_domain_list_lock); + if (tomoyo_is_domain_def(data + 7)) domain = tomoyo_find_domain(data + 7); - up_read(&tomoyo_domain_list_lock); - } } else return false; head->write_var1 = domain; @@ -1278,13 +1255,11 @@ static bool tomoyo_is_select_one(struct if (domain) { struct tomoyo_domain_info *d; head->read_var1 = NULL; - down_read(&tomoyo_domain_list_lock); - list_for_each_entry(d, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(d, &tomoyo_domain_list, list) { if (d == domain) break; head->read_var1 = &d->list; } - up_read(&tomoyo_domain_list_lock); head->read_var2 = NULL; head->read_bit = 0; head->read_step = 0; @@ -1300,6 +1275,8 @@ static bool tomoyo_is_select_one(struct * @domainname: The name of domain. * * Returns 0. + * + * Caller holds tomoyo_lock(). */ static int tomoyo_delete_domain(char *domainname) { @@ -1308,9 +1285,9 @@ static int tomoyo_delete_domain(char *do name.name = domainname; tomoyo_fill_path_info(&name); - down_write(&tomoyo_domain_list_lock); + mutex_lock(&tomoyo_policy_lock); /* Is there an active domain? */ - list_for_each_entry(domain, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { /* Never delete tomoyo_kernel_domain */ if (domain == &tomoyo_kernel_domain) continue; @@ -1320,7 +1297,7 @@ static int tomoyo_delete_domain(char *do domain->is_deleted = true; break; } - up_write(&tomoyo_domain_list_lock); + mutex_unlock(&tomoyo_policy_lock); return 0; } @@ -1330,6 +1307,8 @@ static int tomoyo_delete_domain(char *do * @head: Pointer to "struct tomoyo_io_buffer". * * Returns 0 on success, negative value otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static int tomoyo_write_domain_policy(struct tomoyo_io_buffer *head) { @@ -1352,11 +1331,9 @@ static int tomoyo_write_domain_policy(st domain = NULL; if (is_delete) tomoyo_delete_domain(data); - else if (is_select) { - down_read(&tomoyo_domain_list_lock); + else if (is_select) domain = tomoyo_find_domain(data); - up_read(&tomoyo_domain_list_lock); - } else + else domain = tomoyo_find_or_assign_new_domain(data, 0); head->write_var1 = domain; return 0; @@ -1511,8 +1488,7 @@ static int tomoyo_read_domain_policy(str return 0; if (head->read_step == 0) head->read_step = 1; - down_read(&tomoyo_domain_list_lock); - list_for_each_cookie(dpos, head->read_var1, &tomoyo_domain_list) { + list_for_each_cookie_rcu(dpos, head->read_var1, &tomoyo_domain_list) { struct tomoyo_domain_info *domain; const char *quota_exceeded = ""; const char *transition_failed = ""; @@ -1543,9 +1519,8 @@ acl_loop: if (head->read_step == 3) goto tail_mark; /* Print ACL entries in the domain. */ - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_cookie(apos, head->read_var2, - &domain->acl_info_list) { + list_for_each_cookie_rcu(apos, head->read_var2, + &domain->acl_info_list) { struct tomoyo_acl_info *ptr = list_entry(apos, struct tomoyo_acl_info, list); @@ -1553,7 +1528,6 @@ acl_loop: if (!done) break; } - up_read(&tomoyo_domain_acl_info_list_lock); if (!done) break; head->read_step = 3; @@ -1565,7 +1539,6 @@ tail_mark: if (head->read_single_domain) break; } - up_read(&tomoyo_domain_list_lock); head->read_eof = done; return 0; } @@ -1581,6 +1554,8 @@ tail_mark: * * ( echo "select " $domainname; echo "use_profile " $profile ) | * /usr/lib/ccs/loadpolicy -d + * + * Caller holds tomoyo_lock(). 
*/ static int tomoyo_write_domain_profile(struct tomoyo_io_buffer *head) { @@ -1592,9 +1567,7 @@ static int tomoyo_write_domain_profile(s if (!cp) return -EINVAL; *cp = '\0'; - down_read(&tomoyo_domain_list_lock); domain = tomoyo_find_domain(cp + 1); - up_read(&tomoyo_domain_list_lock); if (strict_strtoul(data, 10, &profile)) return -EINVAL; if (domain && profile < TOMOYO_MAX_PROFILES @@ -1624,8 +1597,7 @@ static int tomoyo_read_domain_profile(st if (head->read_eof) return 0; - down_read(&tomoyo_domain_list_lock); - list_for_each_cookie(pos, head->read_var1, &tomoyo_domain_list) { + list_for_each_cookie_rcu(pos, head->read_var1, &tomoyo_domain_list) { struct tomoyo_domain_info *domain; domain = list_entry(pos, struct tomoyo_domain_info, list); if (domain->is_deleted) @@ -1635,7 +1607,6 @@ static int tomoyo_read_domain_profile(st if (!done) break; } - up_read(&tomoyo_domain_list_lock); head->read_eof = done; return 0; } @@ -1854,16 +1825,24 @@ void tomoyo_load_policy(const char *file printk(KERN_INFO "Mandatory Access Control activated.\n"); tomoyo_policy_loaded = true; { /* Check all profiles currently assigned to domains are defined. 
*/ + const int idx = tomoyo_lock(); struct tomoyo_domain_info *domain; - down_read(&tomoyo_domain_list_lock); - list_for_each_entry(domain, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { const u8 profile = domain->profile; if (tomoyo_profile_ptr[profile]) continue; panic("Profile %u (used by '%s') not defined.\n", profile, domain->domainname->name); } - up_read(&tomoyo_domain_list_lock); + tomoyo_unlock(idx); + } + { + struct task_struct *task = + kthread_create(tomoyo_gc_thread, NULL, "GC for TOMOYO"); + if (IS_ERR(task)) + printk(KERN_ERR "GC thread not available.\n"); + else + wake_up_process(task); } } @@ -1997,6 +1976,7 @@ static int tomoyo_open_control(const u8 } } file->private_data = head; + head->tomoyo_users_counter_index = tomoyo_lock(); /* * Call the handler now if the file is * /sys/kernel/security/tomoyo/self_domain @@ -2114,6 +2094,7 @@ static int tomoyo_write_control(struct f static int tomoyo_close_control(struct file *file) { struct tomoyo_io_buffer *head = file->private_data; + tomoyo_unlock(head->tomoyo_users_counter_index); /* Release memory used for policy I/O. */ tomoyo_free(head->read_buf); --- security-testing-2.6.git.orig/security/tomoyo/common.h +++ security-testing-2.6.git/security/tomoyo/common.h @@ -156,6 +156,7 @@ struct tomoyo_domain_info { struct list_head acl_info_list; /* Name of this domain. Never NULL. */ const struct tomoyo_path_info *domainname; + atomic_t users; u8 profile; /* Profile number to use. */ bool is_deleted; /* Delete flag. */ bool quota_warned; /* Quota warnning flag. */ @@ -266,6 +267,8 @@ struct tomoyo_io_buffer { int (*write) (struct tomoyo_io_buffer *); /* Exclusive lock for this structure. */ struct mutex io_sem; + /* counter which this structure locked. */ + int tomoyo_users_counter_index; /* The position currently reading from. */ struct list_head *read_var1; /* Extra variables for reading. 
*/ @@ -421,10 +424,9 @@ static inline bool tomoyo_is_invalid(con /* The list for "struct tomoyo_domain_info". */ extern struct list_head tomoyo_domain_list; -extern struct rw_semaphore tomoyo_domain_list_lock; -/* Lock for domain->acl_info_list. */ -extern struct rw_semaphore tomoyo_domain_acl_info_list_lock; +/* Lock for modifying policy. */ +extern struct mutex tomoyo_policy_lock; /* Has /sbin/init started? */ extern bool tomoyo_policy_loaded; @@ -433,21 +435,193 @@ extern bool tomoyo_policy_loaded; extern struct tomoyo_domain_info tomoyo_kernel_domain; /** - * list_for_each_cookie - iterate over a list with cookie. + * list_for_each_cookie_rcu - iterate over a list with cookie. * @pos: the &struct list_head to use as a loop cursor. * @cookie: the &struct list_head to use as a cookie. * @head: the head for your list. * - * Same with list_for_each() except that this primitive uses @cookie + * Same with __list_for_each_rcu() except that this primitive uses @cookie * so that we can continue iteration. * @cookie must be NULL when iteration starts, and @cookie will become * NULL when iteration finishes. 
*/ -#define list_for_each_cookie(pos, cookie, head) \ +#define list_for_each_cookie_rcu(pos, cookie, head) \ for (({ if (!cookie) \ - cookie = head; }), \ - pos = (cookie)->next; \ + cookie = head; }), \ + pos = rcu_dereference((cookie)->next); \ prefetch(pos->next), pos != (head) || ((cookie) = NULL); \ - (cookie) = pos, pos = pos->next) + (cookie) = pos, pos = rcu_dereference(pos->next)) + +extern atomic_t tomoyo_users_counter[2]; +extern atomic_t tomoyo_users_counter_idx; + +static inline int tomoyo_lock(void) +{ + int idx = atomic_read(&tomoyo_users_counter_idx); + atomic_inc(&tomoyo_users_counter[idx]); + return idx; +} + +static inline void tomoyo_unlock(int idx) +{ + atomic_dec(&tomoyo_users_counter[idx]); +} + +/* + * tomoyo_policy_manager_entry is a structure which is used for holding list of + * domainnames or programs which are permitted to modify configuration via + * /sys/kernel/security/tomoyo/ interface. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_policy_manager_list . + * (2) "manager" is a domainname or a program's pathname. + * (3) "is_domain" is a bool which is true if "manager" is a domainname, false + * otherwise. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_policy_manager_entry { + struct list_head list; + /* A path to program or a domainname. */ + const struct tomoyo_path_info *manager; + bool is_domain; /* True if manager is a domainname. */ + bool is_deleted; /* True if this entry is deleted. */ +}; + +extern struct list_head tomoyo_policy_manager_list; + +/* + * tomoyo_globally_readable_file_entry is a structure which is used for holding + * "allow_read" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_globally_readable_list . + * (2) "filename" is a pathname which is allowed to open(O_RDONLY). + * (3) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. 
+ */ +struct tomoyo_globally_readable_file_entry { + struct list_head list; + const struct tomoyo_path_info *filename; + bool is_deleted; +}; + +extern struct list_head tomoyo_globally_readable_list; + +/* + * tomoyo_pattern_entry is a structure which is used for holding + * "tomoyo_pattern_list" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_pattern_list . + * (2) "pattern" is a pathname pattern which is used for converting pathnames + * to pathname patterns during learning mode. + * (3) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_pattern_entry { + struct list_head list; + const struct tomoyo_path_info *pattern; + bool is_deleted; +}; + +extern struct list_head tomoyo_pattern_list; + +/* + * tomoyo_no_rewrite_entry is a structure which is used for holding + * "deny_rewrite" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_no_rewrite_list . + * (2) "pattern" is a pathname which is by default not permitted to modify + * already existing content. + * (3) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_no_rewrite_entry { + struct list_head list; + const struct tomoyo_path_info *pattern; + bool is_deleted; +}; + +extern struct list_head tomoyo_no_rewrite_list; + +/* + * tomoyo_domain_initializer_entry is a structure which is used for holding + * "initialize_domain" and "no_initialize_domain" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_domain_initializer_list . + * (2) "domainname" which is "a domainname" or "the last component of a + * domainname". This field is NULL if "from" clause is not specified. + * (3) "program" which is a program's pathname. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + * (5) "is_not" is a bool which is true if "no_initialize_domain", false + * otherwise. 
+ * (6) "is_last_name" is a bool which is true if "domainname" is "the last + * component of a domainname", false otherwise. + */ +struct tomoyo_domain_initializer_entry { + struct list_head list; + const struct tomoyo_path_info *domainname; /* This may be NULL */ + const struct tomoyo_path_info *program; + bool is_deleted; + bool is_not; /* True if this entry is "no_initialize_domain". */ + /* True if the domainname is tomoyo_get_last_name(). */ + bool is_last_name; +}; + +extern struct list_head tomoyo_domain_initializer_list; + +/* + * tomoyo_domain_keeper_entry is a structure which is used for holding + * "keep_domain" and "no_keep_domain" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_domain_keeper_list . + * (2) "domainname" which is "a domainname" or "the last component of a + * domainname". + * (3) "program" which is a program's pathname. + * This field is NULL if "from" clause is not specified. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + * (5) "is_not" is a bool which is true if "no_initialize_domain", false + * otherwise. + * (6) "is_last_name" is a bool which is true if "domainname" is "the last + * component of a domainname", false otherwise. + */ +struct tomoyo_domain_keeper_entry { + struct list_head list; + const struct tomoyo_path_info *domainname; + const struct tomoyo_path_info *program; /* This may be NULL */ + bool is_deleted; + bool is_not; /* True if this entry is "no_keep_domain". */ + /* True if the domainname is tomoyo_get_last_name(). */ + bool is_last_name; +}; + +extern struct list_head tomoyo_domain_keeper_list; + +/* + * tomoyo_alias_entry is a structure which is used for holding "alias" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_alias_list . + * (2) "original_name" which is a dereferenced pathname. + * (3) "aliased_name" which is a symlink's pathname. 
+ * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_alias_entry { + struct list_head list; + const struct tomoyo_path_info *original_name; + const struct tomoyo_path_info *aliased_name; + bool is_deleted; +}; + +extern struct list_head tomoyo_alias_list; + +int tomoyo_gc_thread(void *unused); #endif /* !defined(_SECURITY_TOMOYO_COMMON_H) */ --- security-testing-2.6.git.orig/security/tomoyo/domain.c +++ security-testing-2.6.git/security/tomoyo/domain.c @@ -58,77 +58,6 @@ struct tomoyo_domain_info tomoyo_kernel_ * exceptions. */ LIST_HEAD(tomoyo_domain_list); -DECLARE_RWSEM(tomoyo_domain_list_lock); - -/* - * tomoyo_domain_initializer_entry is a structure which is used for holding - * "initialize_domain" and "no_initialize_domain" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_domain_initializer_list . - * (2) "domainname" which is "a domainname" or "the last component of a - * domainname". This field is NULL if "from" clause is not specified. - * (3) "program" which is a program's pathname. - * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - * (5) "is_not" is a bool which is true if "no_initialize_domain", false - * otherwise. - * (6) "is_last_name" is a bool which is true if "domainname" is "the last - * component of a domainname", false otherwise. - */ -struct tomoyo_domain_initializer_entry { - struct list_head list; - const struct tomoyo_path_info *domainname; /* This may be NULL */ - const struct tomoyo_path_info *program; - bool is_deleted; - bool is_not; /* True if this entry is "no_initialize_domain". */ - /* True if the domainname is tomoyo_get_last_name(). */ - bool is_last_name; -}; - -/* - * tomoyo_domain_keeper_entry is a structure which is used for holding - * "keep_domain" and "no_keep_domain" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_domain_keeper_list . 
- * (2) "domainname" which is "a domainname" or "the last component of a - * domainname". - * (3) "program" which is a program's pathname. - * This field is NULL if "from" clause is not specified. - * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - * (5) "is_not" is a bool which is true if "no_initialize_domain", false - * otherwise. - * (6) "is_last_name" is a bool which is true if "domainname" is "the last - * component of a domainname", false otherwise. - */ -struct tomoyo_domain_keeper_entry { - struct list_head list; - const struct tomoyo_path_info *domainname; - const struct tomoyo_path_info *program; /* This may be NULL */ - bool is_deleted; - bool is_not; /* True if this entry is "no_keep_domain". */ - /* True if the domainname is tomoyo_get_last_name(). */ - bool is_last_name; -}; - -/* - * tomoyo_alias_entry is a structure which is used for holding "alias" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_alias_list . - * (2) "original_name" which is a dereferenced pathname. - * (3) "aliased_name" which is a symlink's pathname. - * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_alias_entry { - struct list_head list; - const struct tomoyo_path_info *original_name; - const struct tomoyo_path_info *aliased_name; - bool is_deleted; -}; /** * tomoyo_get_last_name - Get last component of a domainname. @@ -183,8 +112,7 @@ const char *tomoyo_get_last_name(const s * will cause "/usr/sbin/httpd" to belong to "<kernel> /usr/sbin/httpd" domain * unless executed from "<kernel> /etc/rc.d/init.d/httpd" domain. */ -static LIST_HEAD(tomoyo_domain_initializer_list); -static DECLARE_RWSEM(tomoyo_domain_initializer_list_lock); +LIST_HEAD(tomoyo_domain_initializer_list); /** * tomoyo_update_domain_initializer_entry - Update "struct tomoyo_domain_initializer_entry" list. 
@@ -227,8 +155,8 @@ static int tomoyo_update_domain_initiali } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_initializer_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, list) { if (ptr->is_not != is_not || ptr->domainname != saved_domainname || ptr->program != saved_program) @@ -244,12 +172,12 @@ static int tomoyo_update_domain_initiali saved_program = NULL; new_entry->is_not = is_not; new_entry->is_last_name = is_last_name; - list_add_tail(&new_entry->list, - &tomoyo_domain_initializer_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_domain_initializer_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_domain_initializer_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_domainname); tomoyo_put_name(saved_program); kfree(new_entry); @@ -268,15 +196,14 @@ bool tomoyo_read_domain_initializer_poli struct list_head *pos; bool done = true; - down_read(&tomoyo_domain_initializer_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_domain_initializer_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_domain_initializer_list) { const char *no; const char *from = ""; const char *domain = ""; struct tomoyo_domain_initializer_entry *ptr; ptr = list_entry(pos, struct tomoyo_domain_initializer_entry, - list); + list); if (ptr->is_deleted) continue; no = ptr->is_not ? "no_" : ""; @@ -291,7 +218,6 @@ bool tomoyo_read_domain_initializer_poli if (!done) break; } - up_read(&tomoyo_domain_initializer_list_lock); return done; } @@ -328,6 +254,8 @@ int tomoyo_write_domain_initializer_poli * * Returns true if executing @program reinitializes domain transition, * false otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static bool tomoyo_is_domain_initializer(const struct tomoyo_path_info * domainname, @@ -338,8 +266,7 @@ static bool tomoyo_is_domain_initializer struct tomoyo_domain_initializer_entry *ptr; bool flag = false; - down_read(&tomoyo_domain_initializer_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, list) { if (ptr->is_deleted) continue; if (ptr->domainname) { @@ -359,7 +286,6 @@ static bool tomoyo_is_domain_initializer } flag = true; } - up_read(&tomoyo_domain_initializer_list_lock); return flag; } @@ -401,8 +327,7 @@ static bool tomoyo_is_domain_initializer * "<kernel> /usr/sbin/sshd /bin/bash /usr/bin/passwd" domain, unless * explicitly specified by "initialize_domain". */ -static LIST_HEAD(tomoyo_domain_keeper_list); -static DECLARE_RWSEM(tomoyo_domain_keeper_list_lock); +LIST_HEAD(tomoyo_domain_keeper_list); /** * tomoyo_update_domain_keeper_entry - Update "struct tomoyo_domain_keeper_entry" list. 
@@ -445,8 +370,8 @@ static int tomoyo_update_domain_keeper_e } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_keeper_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, list) { if (ptr->is_not != is_not || ptr->domainname != saved_domainname || ptr->program != saved_program) @@ -462,11 +387,12 @@ static int tomoyo_update_domain_keeper_e saved_program = NULL; new_entry->is_not = is_not; new_entry->is_last_name = is_last_name; - list_add_tail(&new_entry->list, &tomoyo_domain_keeper_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_domain_keeper_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_domain_keeper_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_domainname); tomoyo_put_name(saved_program); kfree(new_entry); @@ -506,9 +432,8 @@ bool tomoyo_read_domain_keeper_policy(st struct list_head *pos; bool done = true; - down_read(&tomoyo_domain_keeper_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_domain_keeper_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_domain_keeper_list) { struct tomoyo_domain_keeper_entry *ptr; const char *no; const char *from = ""; @@ -529,7 +454,6 @@ bool tomoyo_read_domain_keeper_policy(st if (!done) break; } - up_read(&tomoyo_domain_keeper_list_lock); return done; } @@ -542,6 +466,8 @@ bool tomoyo_read_domain_keeper_policy(st * * Returns true if executing @program supresses domain transition, * false otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static bool tomoyo_is_domain_keeper(const struct tomoyo_path_info *domainname, const struct tomoyo_path_info *program, @@ -550,8 +476,7 @@ static bool tomoyo_is_domain_keeper(cons struct tomoyo_domain_keeper_entry *ptr; bool flag = false; - down_read(&tomoyo_domain_keeper_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, list) { if (ptr->is_deleted) continue; if (!ptr->is_last_name) { @@ -569,7 +494,6 @@ static bool tomoyo_is_domain_keeper(cons } flag = true; } - up_read(&tomoyo_domain_keeper_list_lock); return flag; } @@ -603,8 +527,7 @@ static bool tomoyo_is_domain_keeper(cons * /bin/busybox and domainname which the current process will belong to after * execve() succeeds is calculated using /bin/cat rather than /bin/busybox . */ -static LIST_HEAD(tomoyo_alias_list); -static DECLARE_RWSEM(tomoyo_alias_list_lock); +LIST_HEAD(tomoyo_alias_list); /** * tomoyo_update_alias_entry - Update "struct tomoyo_alias_entry" list. 
@@ -637,8 +560,8 @@ static int tomoyo_update_alias_entry(con } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_alias_list_lock); - list_for_each_entry(ptr, &tomoyo_alias_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { if (ptr->original_name != saved_original_name || ptr->aliased_name != saved_aliased_name) continue; @@ -651,11 +574,11 @@ static int tomoyo_update_alias_entry(con saved_original_name = NULL; new_entry->aliased_name = saved_aliased_name; saved_aliased_name = NULL; - list_add_tail(&new_entry->list, &tomoyo_alias_list); + list_add_tail_rcu(&new_entry->list, &tomoyo_alias_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_alias_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_original_name); tomoyo_put_name(saved_aliased_name); kfree(new_entry); @@ -674,8 +597,7 @@ bool tomoyo_read_alias_policy(struct tom struct list_head *pos; bool done = true; - down_read(&tomoyo_alias_list_lock); - list_for_each_cookie(pos, head->read_var2, &tomoyo_alias_list) { + list_for_each_cookie_rcu(pos, head->read_var2, &tomoyo_alias_list) { struct tomoyo_alias_entry *ptr; ptr = list_entry(pos, struct tomoyo_alias_entry, list); @@ -687,7 +609,6 @@ bool tomoyo_read_alias_policy(struct tom if (!done) break; } - up_read(&tomoyo_alias_list_lock); return done; } @@ -731,52 +652,18 @@ struct tomoyo_domain_info *tomoyo_find_o if (!saved_domainname) return NULL; new_domain = kmalloc(sizeof(*new_domain), GFP_KERNEL); - down_write(&tomoyo_domain_list_lock); + mutex_lock(&tomoyo_policy_lock); domain = tomoyo_find_domain(domainname); - if (domain) - goto out; - /* Can I reuse memory of deleted domain? 
*/ - list_for_each_entry(domain, &tomoyo_domain_list, list) { - struct task_struct *p; - struct tomoyo_acl_info *ptr; - bool flag; - if (!domain->is_deleted || - domain->domainname != saved_domainname) - continue; - flag = false; - read_lock(&tasklist_lock); - for_each_process(p) { - if (tomoyo_real_domain(p) != domain) - continue; - flag = true; - break; - } - read_unlock(&tasklist_lock); - if (flag) - continue; - list_for_each_entry(ptr, &domain->acl_info_list, list) { - ptr->type |= TOMOYO_ACL_DELETED; - } - domain->ignore_global_allow_read = false; - domain->domain_transition_failed = false; - domain->profile = profile; - domain->quota_warned = false; - mb(); /* Avoid out-of-order execution. */ - domain->is_deleted = false; - goto out; - } - /* No memory reusable. Create using new memory. */ - if (tomoyo_memory_ok(new_domain)) { + if (!domain && tomoyo_memory_ok(new_domain)) { domain = new_domain; new_domain = NULL; INIT_LIST_HEAD(&domain->acl_info_list); domain->domainname = saved_domainname; saved_domainname = NULL; domain->profile = profile; - list_add_tail(&domain->list, &tomoyo_domain_list); + list_add_tail_rcu(&domain->list, &tomoyo_domain_list); } - out: - up_write(&tomoyo_domain_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_domainname); kfree(new_domain); return domain; @@ -788,6 +675,8 @@ struct tomoyo_domain_info *tomoyo_find_o * @bprm: Pointer to "struct linux_binprm". * * Returns 0 on success, negative value otherwise. + * + * Caller holds tomoyo_lock(). */ int tomoyo_find_next_domain(struct linux_binprm *bprm) { @@ -810,6 +699,7 @@ int tomoyo_find_next_domain(struct linux struct tomoyo_path_info s; /* symlink name */ struct tomoyo_path_info l; /* last name */ static bool initialized; + const int idx = tomoyo_lock(); if (!tmp) goto out; @@ -848,8 +738,7 @@ int tomoyo_find_next_domain(struct linux if (tomoyo_pathcmp(&r, &s)) { struct tomoyo_alias_entry *ptr; /* Is this program allowed to be called via symbolic links? 
*/ - down_read(&tomoyo_alias_list_lock); - list_for_each_entry(ptr, &tomoyo_alias_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { if (ptr->is_deleted || tomoyo_pathcmp(&r, ptr->original_name) || tomoyo_pathcmp(&s, ptr->aliased_name)) @@ -860,7 +749,6 @@ int tomoyo_find_next_domain(struct linux tomoyo_fill_path_info(&r); break; } - up_read(&tomoyo_alias_list_lock); } /* Check execute permission. */ @@ -891,9 +779,7 @@ int tomoyo_find_next_domain(struct linux } if (domain || strlen(new_domain_name) >= TOMOYO_MAX_PATHNAME_LEN) goto done; - down_read(&tomoyo_domain_list_lock); domain = tomoyo_find_domain(new_domain_name); - up_read(&tomoyo_domain_list_lock); if (domain) goto done; if (is_enforce) @@ -910,9 +796,12 @@ int tomoyo_find_next_domain(struct linux else old_domain->domain_transition_failed = true; out: + BUG_ON(bprm->cred->security); if (!domain) domain = old_domain; + atomic_inc(&domain->users); bprm->cred->security = domain; + tomoyo_unlock(idx); tomoyo_free(real_program_name); tomoyo_free(symlink_program_name); tomoyo_free(tmp); --- security-testing-2.6.git.orig/security/tomoyo/file.c +++ security-testing-2.6.git/security/tomoyo/file.c @@ -14,56 +14,6 @@ #include "realpath.h" #define ACC_MODE(x) ("\000\004\002\006"[(x)&O_ACCMODE]) -/* - * tomoyo_globally_readable_file_entry is a structure which is used for holding - * "allow_read" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_globally_readable_list . - * (2) "filename" is a pathname which is allowed to open(O_RDONLY). - * (3) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_globally_readable_file_entry { - struct list_head list; - const struct tomoyo_path_info *filename; - bool is_deleted; -}; - -/* - * tomoyo_pattern_entry is a structure which is used for holding - * "tomoyo_pattern_list" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_pattern_list . 
- * (2) "pattern" is a pathname pattern which is used for converting pathnames - * to pathname patterns during learning mode. - * (3) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_pattern_entry { - struct list_head list; - const struct tomoyo_path_info *pattern; - bool is_deleted; -}; - -/* - * tomoyo_no_rewrite_entry is a structure which is used for holding - * "deny_rewrite" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_no_rewrite_list . - * (2) "pattern" is a pathname which is by default not permitted to modify - * already existing content. - * (3) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_no_rewrite_entry { - struct list_head list; - const struct tomoyo_path_info *pattern; - bool is_deleted; -}; - /* Keyword array for single path operations. */ static const char *tomoyo_sp_keyword[TOMOYO_MAX_SINGLE_PATH_OPERATION] = { [TOMOYO_TYPE_READ_WRITE_ACL] = "read/write", @@ -159,8 +109,8 @@ static struct tomoyo_path_info *tomoyo_g return NULL; } -/* Lock for domain->acl_info_list. */ -DECLARE_RWSEM(tomoyo_domain_acl_info_list_lock); +/* Lock for modifying TOMOYO's policy. */ +DEFINE_MUTEX(tomoyo_policy_lock); static int tomoyo_update_double_path_acl(const u8 type, const char *filename1, const char *filename2, @@ -195,8 +145,7 @@ static int tomoyo_update_single_path_acl * given "allow_read /lib/libc-2.5.so" to the domain which current process * belongs to. */ -static LIST_HEAD(tomoyo_globally_readable_list); -static DECLARE_RWSEM(tomoyo_globally_readable_list_lock); +LIST_HEAD(tomoyo_globally_readable_list); /** * tomoyo_update_globally_readable_entry - Update "struct tomoyo_globally_readable_file_entry" list. 
@@ -221,8 +170,8 @@ static int tomoyo_update_globally_readab return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_globally_readable_list_lock); - list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, list) { if (ptr->filename != saved_filename) continue; ptr->is_deleted = is_delete; @@ -232,11 +181,12 @@ static int tomoyo_update_globally_readab if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->filename = saved_filename; saved_filename = NULL; - list_add_tail(&new_entry->list, &tomoyo_globally_readable_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_globally_readable_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_globally_readable_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_filename); kfree(new_entry); return error; @@ -248,21 +198,21 @@ static int tomoyo_update_globally_readab * @filename: The filename to check. * * Returns true if any domain can open @filename for reading, false otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static bool tomoyo_is_globally_readable_file(const struct tomoyo_path_info * filename) { struct tomoyo_globally_readable_file_entry *ptr; bool found = false; - down_read(&tomoyo_globally_readable_list_lock); - list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, list) { if (!ptr->is_deleted && tomoyo_path_matches_pattern(filename, ptr->filename)) { found = true; break; } } - up_read(&tomoyo_globally_readable_list_lock); return found; } @@ -291,9 +241,8 @@ bool tomoyo_read_globally_readable_polic struct list_head *pos; bool done = true; - down_read(&tomoyo_globally_readable_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_globally_readable_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_globally_readable_list) { struct tomoyo_globally_readable_file_entry *ptr; ptr = list_entry(pos, struct tomoyo_globally_readable_file_entry, @@ -305,7 +254,6 @@ bool tomoyo_read_globally_readable_polic if (!done) break; } - up_read(&tomoyo_globally_readable_list_lock); return done; } @@ -338,8 +286,7 @@ bool tomoyo_read_globally_readable_polic * which pretends as if /proc/self/ is not a symlink; so that we can forbid * current process from accessing other process's information. */ -static LIST_HEAD(tomoyo_pattern_list); -static DECLARE_RWSEM(tomoyo_pattern_list_lock); +LIST_HEAD(tomoyo_pattern_list); /** * tomoyo_update_file_pattern_entry - Update "struct tomoyo_pattern_entry" list. 
@@ -364,8 +311,8 @@ static int tomoyo_update_file_pattern_en return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_pattern_list_lock); - list_for_each_entry(ptr, &tomoyo_pattern_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { if (saved_pattern != ptr->pattern) continue; ptr->is_deleted = is_delete; @@ -375,11 +322,11 @@ static int tomoyo_update_file_pattern_en if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->pattern = saved_pattern; saved_pattern = NULL; - list_add_tail(&new_entry->list, &tomoyo_pattern_list); + list_add_tail_rcu(&new_entry->list, &tomoyo_pattern_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_pattern_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_pattern); kfree(new_entry); return error; @@ -391,6 +338,8 @@ static int tomoyo_update_file_pattern_en * @filename: The filename to find patterned pathname. * * Returns pointer to pathname pattern if matched, @filename otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static const struct tomoyo_path_info * tomoyo_get_file_pattern(const struct tomoyo_path_info *filename) @@ -398,8 +347,7 @@ tomoyo_get_file_pattern(const struct tom struct tomoyo_pattern_entry *ptr; const struct tomoyo_path_info *pattern = NULL; - down_read(&tomoyo_pattern_list_lock); - list_for_each_entry(ptr, &tomoyo_pattern_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { if (ptr->is_deleted) continue; if (!tomoyo_path_matches_pattern(filename, ptr->pattern)) @@ -412,7 +360,6 @@ tomoyo_get_file_pattern(const struct tom break; } } - up_read(&tomoyo_pattern_list_lock); if (pattern) filename = pattern; return filename; @@ -443,8 +390,7 @@ bool tomoyo_read_file_pattern(struct tom struct list_head *pos; bool done = true; - down_read(&tomoyo_pattern_list_lock); - list_for_each_cookie(pos, head->read_var2, &tomoyo_pattern_list) { + list_for_each_cookie_rcu(pos, head->read_var2, &tomoyo_pattern_list) { struct tomoyo_pattern_entry *ptr; ptr = list_entry(pos, struct tomoyo_pattern_entry, list); if (ptr->is_deleted) @@ -454,7 +400,6 @@ bool tomoyo_read_file_pattern(struct tom if (!done) break; } - up_read(&tomoyo_pattern_list_lock); return done; } @@ -487,8 +432,7 @@ bool tomoyo_read_file_pattern(struct tom * " (deleted)" suffix if the file is already unlink()ed; so that we don't * need to worry whether the file is already unlink()ed or not. */ -static LIST_HEAD(tomoyo_no_rewrite_list); -static DECLARE_RWSEM(tomoyo_no_rewrite_list_lock); +LIST_HEAD(tomoyo_no_rewrite_list); /** * tomoyo_update_no_rewrite_entry - Update "struct tomoyo_no_rewrite_entry" list. 
@@ -513,8 +457,8 @@ static int tomoyo_update_no_rewrite_entr return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_no_rewrite_list_lock); - list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { if (ptr->pattern != saved_pattern) continue; ptr->is_deleted = is_delete; @@ -524,11 +468,11 @@ static int tomoyo_update_no_rewrite_entr if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->pattern = saved_pattern; saved_pattern = NULL; - list_add_tail(&new_entry->list, &tomoyo_no_rewrite_list); + list_add_tail_rcu(&new_entry->list, &tomoyo_no_rewrite_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_no_rewrite_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_pattern); return error; } @@ -540,14 +484,15 @@ static int tomoyo_update_no_rewrite_entr * * Returns true if @filename is specified by "deny_rewrite" directive, * false otherwise. + * + * Caller holds tomoyo_lock(). 
*/ static bool tomoyo_is_no_rewrite_file(const struct tomoyo_path_info *filename) { struct tomoyo_no_rewrite_entry *ptr; bool found = false; - down_read(&tomoyo_no_rewrite_list_lock); - list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { if (ptr->is_deleted) continue; if (!tomoyo_path_matches_pattern(filename, ptr->pattern)) @@ -555,7 +500,6 @@ static bool tomoyo_is_no_rewrite_file(co found = true; break; } - up_read(&tomoyo_no_rewrite_list_lock); return found; } @@ -584,8 +528,8 @@ bool tomoyo_read_no_rewrite_policy(struc struct list_head *pos; bool done = true; - down_read(&tomoyo_no_rewrite_list_lock); - list_for_each_cookie(pos, head->read_var2, &tomoyo_no_rewrite_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_no_rewrite_list) { struct tomoyo_no_rewrite_entry *ptr; ptr = list_entry(pos, struct tomoyo_no_rewrite_entry, list); if (ptr->is_deleted) @@ -595,7 +539,6 @@ bool tomoyo_read_no_rewrite_policy(struc if (!done) break; } - up_read(&tomoyo_no_rewrite_list_lock); return done; } @@ -660,9 +603,9 @@ static int tomoyo_check_single_path_acl2 { struct tomoyo_acl_info *ptr; int error = -EPERM; + const int idx = tomoyo_lock(); - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; @@ -680,7 +623,7 @@ static int tomoyo_check_single_path_acl2 error = 0; break; } - up_read(&tomoyo_domain_acl_info_list_lock); + tomoyo_unlock(idx); return error; } @@ -846,10 +789,10 @@ static int tomoyo_update_single_path_acl return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_acl_info_list_lock); + mutex_lock(&tomoyo_policy_lock); if (is_delete) goto delete; - list_for_each_entry(ptr, &domain->acl_info_list, list) { 
+ list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; @@ -877,13 +820,14 @@ static int tomoyo_update_single_path_acl new_entry->perm |= rw_mask; new_entry->filename = saved_filename; saved_filename = NULL; - list_add_tail(&new_entry->head.list, &domain->acl_info_list); + list_add_tail_rcu(&new_entry->head.list, + &domain->acl_info_list); new_entry = NULL; error = 0; } goto out; delete: - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; @@ -902,7 +846,7 @@ static int tomoyo_update_single_path_acl break; } out: - up_write(&tomoyo_domain_acl_info_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_filename); kfree(new_entry); return error; @@ -945,10 +889,10 @@ static int tomoyo_update_double_path_acl } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_acl_info_list_lock); + mutex_lock(&tomoyo_policy_lock); if (is_delete) goto delete; - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; @@ -973,13 +917,14 @@ static int tomoyo_update_double_path_acl saved_filename1 = NULL; new_entry->filename2 = saved_filename2; saved_filename2 = NULL; - list_add_tail(&new_entry->head.list, &domain->acl_info_list); + list_add_tail_rcu(&new_entry->head.list, + &domain->acl_info_list); new_entry = NULL; error = 0; } goto out; delete: - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != 
TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; @@ -995,7 +940,7 @@ static int tomoyo_update_double_path_acl break; } out: - up_write(&tomoyo_domain_acl_info_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_filename1); tomoyo_put_name(saved_filename2); kfree(new_entry); @@ -1040,11 +985,12 @@ static int tomoyo_check_double_path_acl( struct tomoyo_acl_info *ptr; const u8 perm = 1 << type; int error = -EPERM; + int idx; if (!tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE)) return 0; - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_entry(ptr, &domain->acl_info_list, list) { + idx = tomoyo_lock(); + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; @@ -1059,7 +1005,7 @@ static int tomoyo_check_double_path_acl( error = 0; break; } - up_read(&tomoyo_domain_acl_info_list_lock); + tomoyo_unlock(idx); return error; } @@ -1169,6 +1115,7 @@ int tomoyo_check_open_permission(struct struct tomoyo_path_info *buf; const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); + int idx; if (!mode || !path->mnt) return 0; @@ -1184,6 +1131,7 @@ int tomoyo_check_open_permission(struct if (!buf) goto out; error = 0; + idx = tomoyo_lock(); /* * If the filename is specified by "deny_rewrite" keyword, * we need to check "allow_rewrite" permission when the filename is not @@ -1203,6 +1151,7 @@ int tomoyo_check_open_permission(struct error = tomoyo_check_single_path_permission2(domain, TOMOYO_TYPE_TRUNCATE_ACL, buf, mode); + tomoyo_unlock(idx); out: tomoyo_free(buf); if (!is_enforce) @@ -1226,6 +1175,7 @@ int tomoyo_check_1path_perm(struct tomoy struct tomoyo_path_info *buf; const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); + int idx; if (!mode || !path->mnt) return 0; @@ -1243,8 +1193,10 @@ int tomoyo_check_1path_perm(struct tomoy 
tomoyo_fill_path_info(buf); } } + idx = tomoyo_lock(); error = tomoyo_check_single_path_permission2(domain, operation, buf, mode); + tomoyo_unlock(idx); out: tomoyo_free(buf); if (!is_enforce) @@ -1267,19 +1219,23 @@ int tomoyo_check_rewrite_permission(stru const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); struct tomoyo_path_info *buf; + int idx; if (!mode || !filp->f_path.mnt) return 0; buf = tomoyo_get_path(&filp->f_path); if (!buf) goto out; + idx = tomoyo_lock(); if (!tomoyo_is_no_rewrite_file(buf)) { error = 0; - goto out; + goto ok; } error = tomoyo_check_single_path_permission2(domain, TOMOYO_TYPE_REWRITE_ACL, buf, mode); + ok: + tomoyo_unlock(idx); out: tomoyo_free(buf); if (!is_enforce) @@ -1306,6 +1262,7 @@ int tomoyo_check_2path_perm(struct tomoy const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); const char *msg; + int idx; if (!mode || !path1->mnt || !path2->mnt) return 0; @@ -1329,10 +1286,11 @@ int tomoyo_check_2path_perm(struct tomoy } } } + idx = tomoyo_lock(); error = tomoyo_check_double_path_acl(domain, operation, buf1, buf2); msg = tomoyo_dp2keyword(operation); if (!error) - goto out; + goto ok; if (tomoyo_verbose_mode(domain)) printk(KERN_WARNING "TOMOYO-%s: Access '%s %s %s' " "denied for %s\n", tomoyo_get_msg(is_enforce), @@ -1344,6 +1302,8 @@ int tomoyo_check_2path_perm(struct tomoy tomoyo_update_double_path_acl(operation, name1, name2, domain, false); } + ok: + tomoyo_unlock(idx); out: tomoyo_free(buf1); tomoyo_free(buf2); --- security-testing-2.6.git.orig/security/tomoyo/realpath.c +++ security-testing-2.6.git/security/tomoyo/realpath.c @@ -15,6 +15,7 @@ #include <linux/fs_struct.h> #include "common.h" #include "realpath.h" +#include "tomoyo.h" /** * tomoyo_encode: Convert binary string to ascii string. @@ -223,6 +224,17 @@ bool tomoyo_memory_ok(void *ptr) return false; } +/** + * tomoyo_free_element - Free memory for elements. 
+ * + * @ptr: Pointer to allocated memory. + */ +static void tomoyo_free_element(void *ptr) +{ + atomic_sub(ksize(ptr), &tomoyo_allocated_memory_for_elements); + kfree(ptr); +} + /* Memory allocated for string data in bytes. */ static atomic_t tomoyo_allocated_memory_for_savename; /* Quota for holding string data in bytes. */ @@ -238,15 +250,10 @@ static unsigned int tomoyo_quota_for_sav /* * tomoyo_name_entry is a structure which is used for linking * "struct tomoyo_path_info" into tomoyo_name_list . - * - * Since tomoyo_name_list manages a list of strings which are shared by - * multiple processes (whereas "struct tomoyo_path_info" inside - * "struct tomoyo_path_info_with_data" is not shared), a reference counter will - * be added to "struct tomoyo_name_entry" rather than "struct tomoyo_path_info" - * when TOMOYO starts supporting garbage collector. */ struct tomoyo_name_entry { struct list_head list; + atomic_t users; struct tomoyo_path_info entry; }; @@ -287,10 +294,11 @@ const struct tomoyo_path_info *tomoyo_ge entry = kmalloc(sizeof(*entry) + len, GFP_KERNEL); allocated_len = entry ? ksize(entry) : 0; mutex_lock(&tomoyo_name_list_lock); - list_for_each_entry(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], - list) { + list_for_each_entry_rcu(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], + list) { if (hash != ptr->entry.hash || strcmp(name, ptr->entry.name)) continue; + atomic_inc(&ptr->users); error = 0; break; } @@ -305,8 +313,9 @@ const struct tomoyo_path_info *tomoyo_ge ptr->entry.name = ((char *) ptr) + sizeof(*ptr); memmove((char *) ptr->entry.name, name, len); tomoyo_fill_path_info(&ptr->entry); - list_add_tail(&ptr->list, - &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); + atomic_set(&ptr->users, 1); + list_add_tail_rcu(&ptr->list, + &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); entry = NULL; error = 0; } @@ -321,6 +330,31 @@ const struct tomoyo_path_info *tomoyo_ge } /** + * tomoyo_put_name - Delete shared memory for string data. 
+ * + * @ptr: Pointer to "struct tomoyo_path_info". + */ +void tomoyo_put_name(const struct tomoyo_path_info *name) +{ + struct tomoyo_name_entry *ptr; + bool can_delete = false; + + if (!name) + return; + ptr = container_of(name, struct tomoyo_name_entry, entry); + mutex_lock(&tomoyo_name_list_lock); + if (atomic_dec_and_test(&ptr->users)) { + list_del(&ptr->list); + can_delete = true; + } + mutex_unlock(&tomoyo_name_list_lock); + if (can_delete) { + atomic_sub(ksize(ptr), &tomoyo_allocated_memory_for_savename); + kfree(ptr); + } +} + +/** * tomoyo_realpath_init - Initialize realpath related code. */ void __init tomoyo_realpath_init(void) @@ -332,11 +366,11 @@ void __init tomoyo_realpath_init(void) INIT_LIST_HEAD(&tomoyo_name_list[i]); INIT_LIST_HEAD(&tomoyo_kernel_domain.acl_info_list); tomoyo_kernel_domain.domainname = tomoyo_get_name(TOMOYO_ROOT_NAME); - list_add_tail(&tomoyo_kernel_domain.list, &tomoyo_domain_list); - down_read(&tomoyo_domain_list_lock); + list_add_tail_rcu(&tomoyo_kernel_domain.list, &tomoyo_domain_list); + i = tomoyo_lock(); if (tomoyo_find_domain(TOMOYO_ROOT_NAME) != &tomoyo_kernel_domain) panic("Can't register tomoyo_kernel_domain"); - up_read(&tomoyo_domain_list_lock); + tomoyo_unlock(i); } /* Memory allocated for temporary purpose. 
*/ @@ -431,3 +465,327 @@ int tomoyo_write_memory_quota(struct tom tomoyo_quota_for_elements = size; return 0; } + +/* Garbage collecter functions */ + +static inline void tomoyo_gc_del_domain_initializer +(struct tomoyo_domain_initializer_entry *ptr) +{ + tomoyo_put_name(ptr->domainname); + tomoyo_put_name(ptr->program); +} + +static inline void tomoyo_gc_del_domain_keeper +(struct tomoyo_domain_keeper_entry *ptr) +{ + tomoyo_put_name(ptr->domainname); + tomoyo_put_name(ptr->program); +} + +static inline void tomoyo_gc_del_alias(struct tomoyo_alias_entry *ptr) +{ + tomoyo_put_name(ptr->original_name); + tomoyo_put_name(ptr->aliased_name); +} + +static inline void tomoyo_gc_del_readable +(struct tomoyo_globally_readable_file_entry *ptr) +{ + tomoyo_put_name(ptr->filename); +} + +static inline void tomoyo_gc_del_pattern(struct tomoyo_pattern_entry *ptr) +{ + tomoyo_put_name(ptr->pattern); +} + +static inline void tomoyo_gc_del_no_rewrite +(struct tomoyo_no_rewrite_entry *ptr) +{ + tomoyo_put_name(ptr->pattern); +} + +static inline void tomoyo_gc_del_manager +(struct tomoyo_policy_manager_entry *ptr) +{ + tomoyo_put_name(ptr->manager); +} + +static void tomoyo_gc_del_acl(struct tomoyo_acl_info *acl) +{ + switch (tomoyo_acl_type1(acl)) { + struct tomoyo_single_path_acl_record *acl1; + struct tomoyo_double_path_acl_record *acl2; + case TOMOYO_TYPE_SINGLE_PATH_ACL: + acl1 = container_of(acl, struct tomoyo_single_path_acl_record, + head); + tomoyo_put_name(acl1->filename); + break; + case TOMOYO_TYPE_DOUBLE_PATH_ACL: + acl2 = container_of(acl, struct tomoyo_double_path_acl_record, + head); + tomoyo_put_name(acl2->filename1); + tomoyo_put_name(acl2->filename2); + break; + } +} + +static bool tomoyo_gc_del_domain(struct tomoyo_domain_info *domain) +{ + struct tomoyo_acl_info *acl; + struct tomoyo_acl_info *tmp; + /* + * We need to recheck domain->users because + * tomoyo_find_next_domain() increments it. 
+ */ + if (atomic_read(&domain->users)) + return false; + /* Delete all entries in this domain. */ + list_for_each_entry_safe(acl, tmp, &domain->acl_info_list, list) { + list_del_rcu(&acl->list); + tomoyo_gc_del_acl(acl); + tomoyo_free_element(acl); + } + tomoyo_put_name(domain->domainname); + return true; +} + +enum tomoyo_gc_id { + TOMOYO_ID_DOMAIN_INITIALIZER, + TOMOYO_ID_DOMAIN_KEEPER, + TOMOYO_ID_ALIAS, + TOMOYO_ID_GLOBALLY_READABLE, + TOMOYO_ID_PATTERN, + TOMOYO_ID_NO_REWRITE, + TOMOYO_ID_MANAGER, + TOMOYO_ID_ACL, + TOMOYO_ID_DOMAIN +}; + +struct tomoyo_gc_entry { + struct list_head list; + int type; + void *element; +}; + + +/* Caller holds tomoyo_policy_lock mutex. */ +static bool tomoyo_add_to_gc(const int type, void *element, + struct list_head *head) +{ + struct tomoyo_gc_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC); + if (!entry) + return false; + entry->type = type; + entry->element = element; + list_add(&entry->list, head); + return true; +} + +/** + * tomoyo_gc_thread_main - Garbage collector thread for TOMOYO. + * + * @unused: Not used. + * + * This function is exclusively executed. 
+ */ +static int tomoyo_gc_thread_main(void *unused) +{ + static DEFINE_MUTEX(tomoyo_gc_mutex); + static LIST_HEAD(tomoyo_gc_queue); + if (!mutex_trylock(&tomoyo_gc_mutex)) + return 0; + + mutex_lock(&tomoyo_policy_lock); + { + struct tomoyo_globally_readable_file_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, + list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_GLOBALLY_READABLE, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_pattern_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_PATTERN, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_no_rewrite_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_NO_REWRITE, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_domain_initializer_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, + list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN_INITIALIZER, + ptr, &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_domain_keeper_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, + list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN_KEEPER, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_alias_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_ALIAS, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_policy_manager_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, + list) { + if (!ptr->is_deleted) 
+ continue; + if (tomoyo_add_to_gc(TOMOYO_ID_MANAGER, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_domain_info *domain; + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { + struct tomoyo_acl_info *acl; + list_for_each_entry_rcu(acl, &domain->acl_info_list, + list) { + if (!(acl->type & TOMOYO_ACL_DELETED)) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_ACL, acl, + &tomoyo_gc_queue)) + list_del_rcu(&acl->list); + else + break; + } + if (domain->is_deleted && + !atomic_read(&domain->users)) { + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN, domain, + &tomoyo_gc_queue)) + list_del_rcu(&domain->list); + else + break; + } + } + } + mutex_unlock(&tomoyo_policy_lock); + if (list_empty(&tomoyo_gc_queue)) + goto done; + { + /* Swap active counter. */ + const int idx = atomic_read(&tomoyo_users_counter_idx); + atomic_set(&tomoyo_users_counter_idx, idx ^ 1); + /* + * Wait for readers who are using previously active counter. + * This is similar to synchronize_rcu() while this code allows + * readers to do operations which may sleep. + */ + while (atomic_read(&tomoyo_users_counter[idx])) + msleep(1000); + } + { + /* + * Nobody is using previously active counter. + * Ready to release memory of elements removed from the list + * during previously active counter was active. 
+ */ + struct tomoyo_gc_entry *p; + struct tomoyo_gc_entry *tmp; + list_for_each_entry_safe(p, tmp, &tomoyo_gc_queue, list) { + switch (p->type) { + case TOMOYO_ID_DOMAIN_INITIALIZER: + tomoyo_gc_del_domain_initializer(p->element); + break; + case TOMOYO_ID_DOMAIN_KEEPER: + tomoyo_gc_del_domain_keeper(p->element); + break; + case TOMOYO_ID_ALIAS: + tomoyo_gc_del_alias(p->element); + break; + case TOMOYO_ID_GLOBALLY_READABLE: + tomoyo_gc_del_readable(p->element); + break; + case TOMOYO_ID_PATTERN: + tomoyo_gc_del_pattern(p->element); + break; + case TOMOYO_ID_NO_REWRITE: + tomoyo_gc_del_no_rewrite(p->element); + break; + case TOMOYO_ID_MANAGER: + tomoyo_gc_del_manager(p->element); + break; + case TOMOYO_ID_ACL: + tomoyo_gc_del_acl(p->element); + break; + case TOMOYO_ID_DOMAIN: + if (!tomoyo_gc_del_domain(p->element)) + continue; + break; + } + tomoyo_free_element(p->element); + list_del(&p->list); + kfree(p); + } + } + done: + mutex_unlock(&tomoyo_gc_mutex); + return 0; +} + +/** + * tomoyo_gc_thread - Garbage collector thread for TOMOYO. + * + * @unused: Not used. + */ +int tomoyo_gc_thread(void *unused) +{ + /* + * Maybe this thread should be created and terminated as needed + * rather than created upon boot and living forever... + */ + while (1) { + msleep(30000); + tomoyo_gc_thread_main(unused); + } +} --- security-testing-2.6.git.orig/security/tomoyo/realpath.h +++ security-testing-2.6.git/security/tomoyo/realpath.h @@ -44,10 +44,7 @@ bool tomoyo_memory_ok(void *ptr); * The RAM is shared, so NEVER try to modify or kfree() the returned name. */ const struct tomoyo_path_info *tomoyo_get_name(const char *name); -static inline void tomoyo_put_name(const struct tomoyo_path_info *name) -{ - /* It's a dummy so far. */ -} +void tomoyo_put_name(const struct tomoyo_path_info *name); /* Allocate memory for temporary use (e.g. permission checks). 
*/ void *tomoyo_alloc(const size_t size); --- security-testing-2.6.git.orig/security/tomoyo/tomoyo.c +++ security-testing-2.6.git/security/tomoyo/tomoyo.c @@ -22,9 +22,19 @@ static int tomoyo_cred_prepare(struct cr * we don't need to duplicate. */ new->security = old->security; + if (new->security) + atomic_inc(&((struct tomoyo_domain_info *) + new->security)->users); return 0; } +static void tomoyo_cred_free(struct cred *cred) +{ + struct tomoyo_domain_info *domain = cred->security; + if (domain) + atomic_dec(&domain->users); +} + static int tomoyo_bprm_set_creds(struct linux_binprm *bprm) { int rc; @@ -49,7 +59,11 @@ static int tomoyo_bprm_set_creds(struct * Tell tomoyo_bprm_check_security() is called for the first time of an * execve operation. */ - bprm->cred->security = NULL; + if (bprm->cred->security) { + atomic_dec(&((struct tomoyo_domain_info *) + bprm->cred->security)->users); + bprm->cred->security = NULL; + } return 0; } @@ -263,6 +277,7 @@ static int tomoyo_dentry_open(struct fil static struct security_operations tomoyo_security_ops = { .name = "tomoyo", .cred_prepare = tomoyo_cred_prepare, + .cred_free = tomoyo_cred_free, .bprm_set_creds = tomoyo_bprm_set_creds, .bprm_check_security = tomoyo_bprm_check_security, #ifdef CONFIG_SYSCTL @@ -291,6 +306,7 @@ static int __init tomoyo_init(void) panic("Failure registering TOMOYO Linux"); printk(KERN_INFO "TOMOYO Linux initialized\n"); cred->security = &tomoyo_kernel_domain; + atomic_inc(&tomoyo_kernel_domain.users); tomoyo_realpath_init(); return 0; } ^ permalink raw reply [flat|nested] 16+ messages in thread
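The get/put pairing that patch 2/3 introduces for shared name strings can be illustrated outside the kernel. Below is a simplified, single-threaded userspace sketch: a plain singly linked list and a non-atomic counter stand in for the kernel's list handling, atomic_t, and tomoyo_name_list_lock, and every identifier (name_entry, get_name, put_name, name_count) is invented for this model rather than taken from TOMOYO.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Userspace stand-in for "struct tomoyo_name_entry": a shared,
 * reference-counted string kept on a singly linked list.  A plain
 * int replaces atomic_t and no lock is taken, since this model is
 * single-threaded.
 */
struct name_entry {
	struct name_entry *next;
	int users;		/* models the patch's atomic_t users */
	char name[];
};

static struct name_entry *name_list;

/*
 * Model of tomoyo_get_name(): return an existing entry with its
 * refcount bumped, or add a new entry with refcount 1.
 */
static struct name_entry *get_name(const char *name)
{
	struct name_entry *ptr;

	for (ptr = name_list; ptr; ptr = ptr->next) {
		if (!strcmp(ptr->name, name)) {
			ptr->users++;
			return ptr;
		}
	}
	ptr = malloc(sizeof(*ptr) + strlen(name) + 1);
	if (!ptr)
		return NULL;
	ptr->users = 1;
	strcpy(ptr->name, name);
	ptr->next = name_list;
	name_list = ptr;
	return ptr;
}

/*
 * Model of tomoyo_put_name(): drop one reference; unlink and free
 * the entry only when the last reference goes away.
 */
static void put_name(struct name_entry *entry)
{
	struct name_entry **p;

	if (!entry || --entry->users)
		return;
	for (p = &name_list; *p; p = &(*p)->next) {
		if (*p == entry) {
			*p = entry->next;
			break;
		}
	}
	free(entry);
}

/* Number of entries currently on the list. */
static int name_count(void)
{
	int n = 0;
	struct name_entry *ptr;

	for (ptr = name_list; ptr; ptr = ptr->next)
		n++;
	return n;
}
```

Two lookups of the same string share one entry, and the memory survives until the last put_name(); that is what makes it safe to defer the final kfree() to a garbage collector.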
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-17 11:19 [PATCH] TOMOYO: Add garbage collector support. (v3) Tetsuo Handa ` (2 preceding siblings ...) 2009-06-17 11:23 ` [PATCH 3/3] TOMOYO: Add RCU-like garbage collector Tetsuo Handa @ 2009-06-17 12:28 ` Peter Zijlstra 2009-06-17 16:31 ` Paul E. McKenney 4 siblings, 0 replies; 16+ messages in thread From: Peter Zijlstra @ 2009-06-17 12:28 UTC (permalink / raw) To: Tetsuo Handa; +Cc: linux-security-module, linux-kernel, paulmck, Ingo Molnar On Wed, 2009-06-17 at 20:19 +0900, Tetsuo Handa wrote: > Paul E. McKenney wrote ( http://lkml.org/lkml/2009/5/27/2 ) : > > I would also recommend the three-part LWN series as a starting point: > > > > # http://lwn.net/Articles/262464/ (What is RCU, Fundamentally?) > > # http://lwn.net/Articles/263130/ (What is RCU's Usage?) > > # http://lwn.net/Articles/264090/ (What is RCU's API?) > I've read these articles. They are very good. > > I came up with an idea that we may be able to implement GC while readers are > permitted to sleep but no read locks are required. > > The idea is to have two counters which hold the number of readers currently > reading the list, one is active and the other is inactive. Reader increments > the currently active counter before starts reading and decrements that counter > after finished reading. GC swaps active counter and inactive counter and waits > for previously active counter's count to become 0 before releasing elements > removed from the list. > Code is shown below. > > atomic_t users_counter[2]; > atomic_t users_counter_idx; > DEFINE_MUTEX(updator_mutex); > DEFINE_MUTEX(gc_mutex); Sounds like an utter scalability nightmare to me though. Why not 'simply' use SRCU or always provide an preemptible RCU domain using: rcu_read_lock_preempt() rcu_read_unlock_preempt() call_rcu_preempt() etc. 
along with the already existing *{,_bh,_sched} variants.

That way PREEMPT_RCU would only affect the implementation of regular
RCU, it being either _sched or _preempt.

^ permalink raw reply	[flat|nested] 16+ messages in thread
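Peter's objection is about cache-line contention: every reader of a global users_counter[] pair writes the same two memory locations, while SRCU gives each CPU its own counter pair and only sums them when a grace period is requested. A rough single-threaded userspace model of that per-CPU layout follows; NR_CPUS and all function names here are illustrative, not the real SRCU API.

```c
#include <assert.h>

#define NR_CPUS 4

/*
 * A global users_counter[2] makes every reader bounce the same cache
 * line between CPUs.  SRCU instead keeps a counter pair per CPU and
 * only sums them when the writer needs a grace period.
 */
static int per_cpu_count[NR_CPUS][2];
static int active_idx;

/* Reader entry: bump a CPU-local counter on the active side. */
static int reader_lock(int cpu)
{
	int idx = active_idx;

	per_cpu_count[cpu][idx]++;	/* touches a CPU-local line only */
	return idx;
}

/* Reader exit: drop the counter on whichever side it entered. */
static void reader_unlock(int cpu, int idx)
{
	per_cpu_count[cpu][idx]--;
}

/* Like srcu_readers_active_idx(): sum per-CPU counters for one side. */
static int readers_active(int idx)
{
	int cpu, sum = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += per_cpu_count[cpu][idx];
	return sum;
}

/*
 * Grace-period start: flip the index, then report how many readers
 * remain on the old side (a real implementation would wait for this
 * sum to drain before freeing anything).
 */
static int flip_and_count_old(void)
{
	int old = active_idx;

	active_idx = old ^ 1;
	return readers_active(old);
}
```

In real SRCU the summation races with concurrent increments, which is why srcu_read_lock() returns the index it used and why synchronize_srcu() needs barriers around the flip; none of that subtlety is modeled here.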
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-17 11:19 [PATCH] TOMOYO: Add garbage collector support. (v3) Tetsuo Handa ` (3 preceding siblings ...) 2009-06-17 12:28 ` [PATCH] TOMOYO: Add garbage collector support. (v3) Peter Zijlstra @ 2009-06-17 16:31 ` Paul E. McKenney 2009-06-18 5:34 ` Tetsuo Handa 4 siblings, 1 reply; 16+ messages in thread From: Paul E. McKenney @ 2009-06-17 16:31 UTC (permalink / raw) To: Tetsuo Handa; +Cc: linux-security-module, linux-kernel On Wed, Jun 17, 2009 at 08:19:07PM +0900, Tetsuo Handa wrote: > Hello. > > This patchset adds garbage collector for TOMOYO. > This time, I'm using some sort of RCU-like approach instead of cookie-list > approach. > > TOMOYO 1/3: Move sleeping operations to outside the semaphore. > 6 files changed, 231 insertions(+), 345 deletions(-) > > TOMOYO 2/3: Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name(). > 5 files changed, 70 insertions(+), 23 deletions(-) > > TOMOYO 3/3: Add RCU-like garbage collector. > 7 files changed, 733 insertions(+), 358 deletions(-) > > Paul E. McKenney wrote ( http://lkml.org/lkml/2009/5/27/2 ) : > > I would also recommend the three-part LWN series as a starting point: > > > > # http://lwn.net/Articles/262464/ (What is RCU, Fundamentally?) > > # http://lwn.net/Articles/263130/ (What is RCU's Usage?) > > # http://lwn.net/Articles/264090/ (What is RCU's API?) > > I've read these articles. They are very good. Glad that they were helpful!!! > I came up with an idea that we may be able to implement GC while readers are > permitted to sleep but no read locks are required. I believe you have a bug in your pseudocode -- please see below. > The idea is to have two counters which hold the number of readers currently > reading the list, one is active and the other is inactive. Reader increments > the currently active counter before starts reading and decrements that counter > after finished reading. 
GC swaps active counter and inactive counter and waits > for previously active counter's count to become 0 before releasing elements > removed from the list. > Code is shown below. > > atomic_t users_counter[2]; > atomic_t users_counter_idx; > DEFINE_MUTEX(updator_mutex); > DEFINE_MUTEX(gc_mutex); > > --- reader --- > { > /* Get counter index. */ > int idx = atomic_read(&users_counter_idx); > /* Lock counter. */ > atomic_inc(&users_counter[idx]); > list_for_each_entry_rcu() { > ... /* Allowed to sleep. */ > } > /* Unlock counter. */ > atomic_dec(&users_counter[idx]); > } > > --- writer --- > { > bool found = false; > /* Get lock for writing. */ > mutex_lock(&updater_mutex); > list_for_each_entry_rcu() { > if (...) > continue; > found = true; > break; > } > if (!found) > list_add_rcu(element); > /* Release lock for writing. */ > mutex_unlock(&updater_mutex); > } > > --- garbage collector --- > { > bool element_deleted = false; > /* Protect the counters from concurrent GC threads. */ > mutex_lock(&gc_mutex); > /* Get lock for writing. */ > mutex_lock(&updater_mutex); > list_for_each_entry_rcu() { > if (...) > continue; > list_del_rcu(element); > element_deleted = true; > break; > } > /* Release lock for writing. */ > mutex_unlock(&updater_mutex); > if (element_deleted) { > /* Swap active counter. */ > const int idx = atomic_read(&users_counter_idx); > atomic_set(&users_counter_idx, idx ^ 1); > /* > * Wait for readers who are using previously active counter. > * This is similar to synchronize_rcu() while this code allows > * readers to do operations which may sleep. > */ > while (atomic_read(&users_counter[idx])) > msleep(1000); > /* > * Nobody is using previously active counter. > * Ready to release memory of elements removed before > * previously active counter became inactive. > */ > kfree(element); > } > mutex_unlock(&gc_mutex); > } Consider the following sequence of events: o CPU 0 picks up users_counter_idx int local variable idx. 
Let's assume that the value is zero.

o	CPU 0 is now preempted, interrupted, or otherwise delayed.

o	CPU 1 starts garbage collection, finding some elements to
	delete, thus setting "element_deleted" to true.

o	CPU 1 continues garbage collection, inverting the value of
	users_counter_idx, so that the value is now one, waiting
	for the value-zero readers, and freeing up the old elements.

o	CPU 0 continues execution, first atomically incrementing
	users_counter[0], then traversing the list, possibly sleeping.

o	CPU 2 starts a new round of garbage collection, again finding
	some elements to delete, and thus again setting
	"elements_deleted" to true.  One of the elements deleted
	is the one that CPU 0 is currently referencing while asleep.

o	CPU 2 continues garbage collection, inverting the value of
	users_counter_idx, so that the value is now zero, waiting
	for the value-one readers, and freeing up the old elements.
	Note that CPU 0 is a value-zero reader, so that CPU 2 will
	not wait on it.

	CPU 2 therefore kfree()s the element that CPU 0 is currently
	referencing.

o	CPU 0 wakes up, and suffers possibly fatal disappointment upon
	attempting to reference an element that has been freed -- and,
	worse yet, possibly re-allocated as some other type of
	structure.

Or am I missing something in your pseudocode?

Also, if you have lots of concurrent readers, you can suffer high memory
contention on the users_counter[] array, correct?

I recommend that you look into use of SRCU in this case.  There is some
documentation at http://lwn.net/Articles/202847/, with a revised version
incorporating feedback from the LWN comments available at:

	http://www.rdrop.com/users/paulmck/RCU/srcu.2007.01.14a.pdf

Well, all but one of the LWN comments -- someone posted one a couple of
months ago that I just now noticed.

Anyway, the general approach would be to make changes to your code
roughly as follows:

1.	replace your users_counter and users_counter_idx with a
	struct srcu_struct.

2.
In the reader, replace the fetch from users_counter_idx and
	the atomic_inc() with srcu_read_lock().

3.	In the garbage collector, replace the fetch/update of
	users_counter_idx and the "while" loop with synchronize_srcu().

> In this idea, GC's kfree() call may be deferred for unknown duration, but
> defer duration will not matter if we use a dedicated kernel thread for GC.
>
> I noticed that there is QRCU in the "RCU has a Family of Wait-to-Finish
> APIs" section.  My idea seems to resemble QRCU except grace periods.
> But "Availability" field is empty.

Oh, what happened to QRCU?  Last I knew, it was queued up in Jens Axboe's
tree, awaiting a first user.  But the approach you have above looks to me
like it will do fine with SRCU.

Or is there some reason why SRCU does not work for you?

							Thanx, Paul

^ permalink raw reply	[flat|nested] 16+ messages in thread
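Paul's interleaving can be replayed deterministically in one thread: nothing orders the reader's fetch of the index against the GC's flips, so two back-to-back GC passes leave the stalled reader counted on the side the second pass never waits for. The sketch below assumes the two GC passes may overlap, which is exactly the premise Tetsuo's gc_mutex argument in the follow-up disputes; the helper names are invented for the model.

```c
#include <assert.h>

/*
 * Single-threaded replay of the race Paul describes: a reader that
 * fetched the counter index and then stalled can end up counted on
 * the side that a later GC pass does not wait for.
 */
static int users_counter[2];
static int users_counter_idx;

/* Reader, step 1: fetch the index (then imagine the CPU stalls). */
static int reader_fetch_idx(void)
{
	return users_counter_idx;
}

/* Reader, step 2 (after the stall): lock the counter it fetched. */
static void reader_lock(int idx)
{
	users_counter[idx]++;
}

/* One GC pass: flip the index; return the side this pass waits on. */
static int gc_flip(void)
{
	int old = users_counter_idx;

	users_counter_idx = old ^ 1;
	return old;
}
```

The assertions walk Paul's exact sequence: the reader fetches index 0 and stalls, GC pass 1 flips and waits on side 0, the reader then increments side 0, and GC pass 2 flips again and waits only on side 1, never on the still-active reader.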
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-17 16:31 ` Paul E. McKenney @ 2009-06-18 5:34 ` Tetsuo Handa 2009-06-18 6:45 ` [PATCH 3/3] TOMOYO: Add SRCU based garbage collector Tetsuo Handa 2009-06-18 15:28 ` [PATCH] TOMOYO: Add garbage collector support. (v3) Paul E. McKenney 0 siblings, 2 replies; 16+ messages in thread From: Tetsuo Handa @ 2009-06-18 5:34 UTC (permalink / raw) To: paulmck; +Cc: linux-security-module, linux-kernel Hello. Paul E. McKenney wrote: > Consider the following sequence of events: > > o CPU 0 picks up users_counter_idx int local variable idx. > Let's assume that the value is zero. > > o CPU 0 is now preempted, interrupted, or otherwise delayed. > > o CPU 1 starts garbage collection, finding some elements to > delete, thus setting "element_deleted" to true. > > o CPU 1 continues garbage collection, inverting the value of > users_counter_idx, so that the value is now one, waiting > for the value-zero readers, and freeing up the old elements. > > o CPU 0 continues execution, first atomically incrementing > users_counter[0], then traversing the list, possibly sleeping. > > o CPU 2 starts a new round of garbage collection, again finding > some elements to delete, and thus again setting > "elements_deleted" to true. One of the elements deleted > is the one that CPU 0 is currently referencing while asleep. > No. CPU 2 can't start a new round of GC because GC function is exclusively executed because of gc_mutex mutex. > o CPU 2 continues garbage collection, inverting the value of > users_counter_idx, so that the value is now zero, waiting > for the value-one readers, and freeing up the old elements. > Note that CPU 0 is a value-zero reader, so that CPU 2 will > not wait on it. > > CPU 2 therefore kfree()s the element that CPU 0 is currently > referencing. > CPU 2 won't continue GC, for CPU 2 can't start a new round of GC. 
> o	CPU 0 wakes up, and suffers possibly fatal disappointment upon
>	attempting to reference an element that has been freed -- and,
>	worse yet, possibly re-allocated as some other type of
>	structure.

CPU 0 won't suffer, for the first round of GC (by CPU 1) prevents CPU 2
from starting a new round of GC.

> Or am I missing something in your pseudocode?

I think you missed that the GC function is executed exclusively.
The race between readers and GC is avoided as below.

(a-1) A reader reads users_counter_idx and saves it to r_idx
(a-2) GC removes an element from the list using RCU
(a-3) GC reads users_counter_idx and saves it to g_idx
(a-4) GC inverts users_counter_idx
(a-5) GC releases the removed element
(a-6) The reader increments users_counter[r_idx]
(a-7) The reader won't see the element removed by GC because the reader
      has not started list traversal as of (a-2)

(b-1) A reader reads users_counter_idx and saves it to r_idx
(b-2) The reader increments users_counter[r_idx]
(b-3) GC removes an element from the list using RCU
(b-4) The reader won't see the element removed by GC
(b-5) GC reads users_counter_idx and saves it to g_idx
(b-6) GC inverts users_counter_idx
(b-7) GC waits for users_counter[g_idx] to become 0
(b-8) The reader decrements users_counter[r_idx]
(b-9) GC releases the removed element

(c-1) A reader reads users_counter_idx and saves it to r_idx
(c-2) The reader increments users_counter[r_idx]
(c-3) The reader sees the element
(c-4) GC removes the element from the list using RCU
(c-5) GC reads users_counter_idx and saves it to g_idx
(c-6) GC inverts users_counter_idx
(c-7) GC waits for users_counter[g_idx] to become 0
(c-8) The reader decrements users_counter[r_idx]
(c-9) GC releases the removed element

What I worry about is that some memory barriers might be needed between

> > {
> >         /* Get counter index. */
> >         int idx = atomic_read(&users_counter_idx);
> >         /* Lock counter. */
> >         atomic_inc(&users_counter[idx]);

- here -

> >         list_for_each_entry_rcu() {
> >                 ... /* Allowed to sleep. */
> >         }

- here -

> >         /* Unlock counter. */
> >         atomic_dec(&users_counter[idx]);
> > }

and

> >         if (element_deleted) {
> >                 /* Swap active counter. */
> >                 const int idx = atomic_read(&users_counter_idx);

- here -

> >                 atomic_set(&users_counter_idx, idx ^ 1);

- here -

> >                 /*
> >                  * Wait for readers who are using previously active counter.
> >                  * This is similar to synchronize_rcu() while this code allows
> >                  * readers to do operations which may sleep.
> >                  */
> >                 while (atomic_read(&users_counter[idx]))
> >                         msleep(1000);
> >                 /*
> >                  * Nobody is using previously active counter.
> >                  * Ready to release memory of elements removed before
> >                  * previously active counter became inactive.
> >                  */
> >                 kfree(element);
> >         }

in order to enforce ordering.

> Also, if you have lots of concurrent readers, you can suffer high memory
> contention on the users_counter[] array, correct?

Excuse me. I couldn't understand "memory contention"...

( http://www.answers.com/topic/memory-contention )
| A situation in which two different programs, or two parts of a program,
| try to read items in the same block of memory at the same time.

Why would we suffer from doing atomic_read() at the same time? Cache
invalidation caused by atomic_inc()/atomic_dec() on a shared variable?

( http://wiki.answers.com/Q/What_is_memory_contention )
| Memory contention is a state an OS memory manager can reside in when too
| many memory requests (alloc, realloc, free) are issued to it from an
| active application, possibly leading to a DOS condition specific to that
| application.

There is no memory allocation for the users_counter[] array.

> I recommend that you look into use of SRCU in this case.

I have one worry regarding SRCU.
Inside synchronize_srcu(), there is a loop

	while (srcu_readers_active_idx(sp, idx))
		schedule_timeout_interruptible(1);

but the reader's sleeping duration varies from less than one second to
more than hours.

Checking the counters every jiffy sounds like too much wasted CPU.
Delaying kfree() for seconds or minutes won't cause trouble for TOMOYO.
It would be nice if the checking interval were configurable, like
"schedule_timeout_interruptible(sp->timeout);".

> Anyway, the general approach would be to make changes to your code
> roughly as follows:
>
> 1.	replace your users_counter and users_counter_idx with a
>	struct srcu_struct.
>
> 2.	In the reader, replace the fetch from users_counter_idx and
>	the atomic_inc() with srcu_read_lock().
>
> 3.	In the garbage collector, replace the fetch/update of
>	users_counter_idx and the "while" loop with synchronize_srcu().

I see. Since I isolated the GC into a dedicated kernel thread, writers no
longer wait for elements to be kfree()d. I can use SRCU.

> Or is there some reason why SRCU does not work for you?

None for the mainline version.
I'm also maintaining TOMOYO for older/distributor kernels, for those who
want to enable SELinux/SMACK/AppArmor/grsecurity etc. and TOMOYO at the
same time. Thus, if my idea works, I want to backport it to TOMOYO for
these kernels.

Regards.

^ permalink raw reply	[flat|nested] 16+ messages in thread
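[Editorial note: Paul's three replacements map onto the pseudocode roughly as follows. This is a kernel-side sketch, not compilable outside a kernel tree; `tomoyo_ss`, `reader()` and `gc_release()` are placeholder names, and `init_srcu_struct(&tomoyo_ss)` would have to be called once at boot.]

```c
#include <linux/srcu.h>
#include <linux/slab.h>

/* Replaces users_counter[] and users_counter_idx. */
static struct srcu_struct tomoyo_ss;

/* Reader: srcu_read_lock() replaces the index fetch plus atomic_inc(),
 * srcu_read_unlock() replaces the atomic_dec(); sleeping inside the
 * read-side critical section is still allowed. */
static void reader(void)
{
	int idx = srcu_read_lock(&tomoyo_ss);
	/* list_for_each_entry_rcu() { ... }  -- may sleep */
	srcu_read_unlock(&tomoyo_ss, idx);
}

/* GC: synchronize_srcu() replaces the index flip and the
 * while (atomic_read(...)) msleep(1000); wait loop. */
static void gc_release(void *element)
{
	/* element was already removed from the list with list_del_rcu() */
	synchronize_srcu(&tomoyo_ss);
	kfree(element);
}
```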
* [PATCH 3/3] TOMOYO: Add SRCU based garbage collector.
  2009-06-18  5:34           ` Tetsuo Handa
@ 2009-06-18  6:45           ` Tetsuo Handa
  2009-06-18 16:05             ` Paul E. McKenney
  2009-06-18 15:28           ` [PATCH] TOMOYO: Add garbage collector support. (v3) Paul E. McKenney
  1 sibling, 1 reply; 16+ messages in thread
From: Tetsuo Handa @ 2009-06-18 6:45 UTC (permalink / raw)
To: paulmck; +Cc: linux-security-module, linux-kernel

Tetsuo Handa wrote:
> I have one worry regarding SRCU.
> Inside synchronize_srcu(), there is a loop
>
>	while (srcu_readers_active_idx(sp, idx))
>		schedule_timeout_interruptible(1);
>
> but the reader's sleeping duration varies from less than one second to
> more than hours.
>
> Checking the counters every jiffy sounds like too much wasted CPU.
> Delaying kfree() for seconds or minutes won't cause trouble for TOMOYO.
> It would be nice if the checking interval were configurable, like
> "schedule_timeout_interruptible(sp->timeout);".

Well, the GC thread's schedule_timeout_interruptible(1) loop does not show
up in /usr/bin/top, so I don't need to worry about the checking interval.

OK. Here is the SRCU version.
------------------------------
Subject: [PATCH 3/3] TOMOYO: Add SRCU based garbage collector.

As of now, TOMOYO cannot release memory used by marked-as-deleted list
elements because TOMOYO does not know how many readers there are.

This patch adds an SRCU-based garbage collector.
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> --- security/tomoyo/common.c | 124 ++++++-------- security/tomoyo/common.h | 180 ++++++++++++++++++++- security/tomoyo/domain.c | 191 ++++------------------- security/tomoyo/file.c | 174 ++++++++------------ security/tomoyo/realpath.c | 373 +++++++++++++++++++++++++++++++++++++++++++-- security/tomoyo/realpath.h | 5 security/tomoyo/tomoyo.c | 18 ++ 7 files changed, 707 insertions(+), 358 deletions(-) --- security-testing-2.6.git.orig/security/tomoyo/common.c +++ security-testing-2.6.git/security/tomoyo/common.c @@ -12,6 +12,7 @@ #include <linux/uaccess.h> #include <linux/security.h> #include <linux/hardirq.h> +#include <linux/kthread.h> #include "realpath.h" #include "common.h" #include "tomoyo.h" @@ -340,10 +341,9 @@ bool tomoyo_is_domain_def(const unsigned * * @domainname: The domainname to find. * - * Caller must call down_read(&tomoyo_domain_list_lock); or - * down_write(&tomoyo_domain_list_lock); . - * * Returns pointer to "struct tomoyo_domain_info" if found, NULL otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). */ struct tomoyo_domain_info *tomoyo_find_domain(const char *domainname) { @@ -352,7 +352,7 @@ struct tomoyo_domain_info *tomoyo_find_d name.name = domainname; tomoyo_fill_path_info(&name); - list_for_each_entry(domain, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { if (!domain->is_deleted && !tomoyo_pathcmp(&name, domain->domainname)) return domain; @@ -788,6 +788,8 @@ bool tomoyo_verbose_mode(const struct to * @domain: Pointer to "struct tomoyo_domain_info". * * Returns true if the domain is not exceeded quota, false otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ bool tomoyo_domain_quota_is_ok(struct tomoyo_domain_info * const domain) { @@ -796,8 +798,7 @@ bool tomoyo_domain_quota_is_ok(struct to if (!domain) return true; - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { if (ptr->type & TOMOYO_ACL_DELETED) continue; switch (tomoyo_acl_type2(ptr)) { @@ -850,7 +851,6 @@ bool tomoyo_domain_quota_is_ok(struct to break; } } - up_read(&tomoyo_domain_acl_info_list_lock); if (count < tomoyo_check_flags(domain, TOMOYO_MAX_ACCEPT_ENTRY)) return true; if (!domain->quota_warned) { @@ -1029,27 +1029,6 @@ static int tomoyo_read_profile(struct to } /* - * tomoyo_policy_manager_entry is a structure which is used for holding list of - * domainnames or programs which are permitted to modify configuration via - * /sys/kernel/security/tomoyo/ interface. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_policy_manager_list . - * (2) "manager" is a domainname or a program's pathname. - * (3) "is_domain" is a bool which is true if "manager" is a domainname, false - * otherwise. - * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_policy_manager_entry { - struct list_head list; - /* A path to program or a domainname. */ - const struct tomoyo_path_info *manager; - bool is_domain; /* True if manager is a domainname. */ - bool is_deleted; /* True if this entry is deleted. */ -}; - -/* * tomoyo_policy_manager_list is used for holding list of domainnames or * programs which are permitted to modify configuration via * /sys/kernel/security/tomoyo/ interface. 
@@ -1079,8 +1058,7 @@ struct tomoyo_policy_manager_entry { * * # cat /sys/kernel/security/tomoyo/manager */ -static LIST_HEAD(tomoyo_policy_manager_list); -static DECLARE_RWSEM(tomoyo_policy_manager_list_lock); +LIST_HEAD(tomoyo_policy_manager_list); /** * tomoyo_update_manager_entry - Add a manager entry. @@ -1112,8 +1090,8 @@ static int tomoyo_update_manager_entry(c return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_policy_manager_list_lock); - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { if (ptr->manager != saved_manager) continue; ptr->is_deleted = is_delete; @@ -1124,11 +1102,12 @@ static int tomoyo_update_manager_entry(c new_entry->manager = saved_manager; saved_manager = NULL; new_entry->is_domain = is_domain; - list_add_tail(&new_entry->list, &tomoyo_policy_manager_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_policy_manager_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_policy_manager_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_manager); kfree(new_entry); return error; @@ -1167,9 +1146,8 @@ static int tomoyo_read_manager_policy(st if (head->read_eof) return 0; - down_read(&tomoyo_policy_manager_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_policy_manager_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_policy_manager_list) { struct tomoyo_policy_manager_entry *ptr; ptr = list_entry(pos, struct tomoyo_policy_manager_entry, list); @@ -1179,7 +1157,6 @@ static int tomoyo_read_manager_policy(st if (!done) break; } - up_read(&tomoyo_policy_manager_list_lock); head->read_eof = done; return 0; } @@ -1189,6 +1166,8 @@ static int tomoyo_read_manager_policy(st * * Returns true if the current process is permitted to modify policy * via /sys/kernel/security/tomoyo/ interface. 
+ * + * Caller holds srcu_read_lock(&tomoyo_ss). */ static bool tomoyo_is_policy_manager(void) { @@ -1202,29 +1181,25 @@ static bool tomoyo_is_policy_manager(voi return true; if (!tomoyo_manage_by_non_root && (task->cred->uid || task->cred->euid)) return false; - down_read(&tomoyo_policy_manager_list_lock); - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { if (!ptr->is_deleted && ptr->is_domain && !tomoyo_pathcmp(domainname, ptr->manager)) { found = true; break; } } - up_read(&tomoyo_policy_manager_list_lock); if (found) return true; exe = tomoyo_get_exe(); if (!exe) return false; - down_read(&tomoyo_policy_manager_list_lock); - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { if (!ptr->is_deleted && !ptr->is_domain && !strcmp(exe, ptr->manager->name)) { found = true; break; } } - up_read(&tomoyo_policy_manager_list_lock); if (!found) { /* Reduce error messages. */ static pid_t last_pid; const pid_t pid = current->pid; @@ -1245,6 +1220,8 @@ static bool tomoyo_is_policy_manager(voi * @data: String to parse. * * Returns true on success, false otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static bool tomoyo_is_select_one(struct tomoyo_io_buffer *head, const char *data) @@ -1260,11 +1237,8 @@ static bool tomoyo_is_select_one(struct domain = tomoyo_real_domain(p); read_unlock(&tasklist_lock); } else if (!strncmp(data, "domain=", 7)) { - if (tomoyo_is_domain_def(data + 7)) { - down_read(&tomoyo_domain_list_lock); + if (tomoyo_is_domain_def(data + 7)) domain = tomoyo_find_domain(data + 7); - up_read(&tomoyo_domain_list_lock); - } } else return false; head->write_var1 = domain; @@ -1278,13 +1252,11 @@ static bool tomoyo_is_select_one(struct if (domain) { struct tomoyo_domain_info *d; head->read_var1 = NULL; - down_read(&tomoyo_domain_list_lock); - list_for_each_entry(d, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(d, &tomoyo_domain_list, list) { if (d == domain) break; head->read_var1 = &d->list; } - up_read(&tomoyo_domain_list_lock); head->read_var2 = NULL; head->read_bit = 0; head->read_step = 0; @@ -1300,6 +1272,8 @@ static bool tomoyo_is_select_one(struct * @domainname: The name of domain. * * Returns 0. + * + * Caller holds srcu_read_lock(&tomoyo_ss). */ static int tomoyo_delete_domain(char *domainname) { @@ -1308,9 +1282,9 @@ static int tomoyo_delete_domain(char *do name.name = domainname; tomoyo_fill_path_info(&name); - down_write(&tomoyo_domain_list_lock); + mutex_lock(&tomoyo_policy_lock); /* Is there an active domain? */ - list_for_each_entry(domain, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { /* Never delete tomoyo_kernel_domain */ if (domain == &tomoyo_kernel_domain) continue; @@ -1320,7 +1294,7 @@ static int tomoyo_delete_domain(char *do domain->is_deleted = true; break; } - up_write(&tomoyo_domain_list_lock); + mutex_unlock(&tomoyo_policy_lock); return 0; } @@ -1330,6 +1304,8 @@ static int tomoyo_delete_domain(char *do * @head: Pointer to "struct tomoyo_io_buffer". * * Returns 0 on success, negative value otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static int tomoyo_write_domain_policy(struct tomoyo_io_buffer *head) { @@ -1352,11 +1328,9 @@ static int tomoyo_write_domain_policy(st domain = NULL; if (is_delete) tomoyo_delete_domain(data); - else if (is_select) { - down_read(&tomoyo_domain_list_lock); + else if (is_select) domain = tomoyo_find_domain(data); - up_read(&tomoyo_domain_list_lock); - } else + else domain = tomoyo_find_or_assign_new_domain(data, 0); head->write_var1 = domain; return 0; @@ -1511,8 +1485,7 @@ static int tomoyo_read_domain_policy(str return 0; if (head->read_step == 0) head->read_step = 1; - down_read(&tomoyo_domain_list_lock); - list_for_each_cookie(dpos, head->read_var1, &tomoyo_domain_list) { + list_for_each_cookie_rcu(dpos, head->read_var1, &tomoyo_domain_list) { struct tomoyo_domain_info *domain; const char *quota_exceeded = ""; const char *transition_failed = ""; @@ -1543,9 +1516,8 @@ acl_loop: if (head->read_step == 3) goto tail_mark; /* Print ACL entries in the domain. */ - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_cookie(apos, head->read_var2, - &domain->acl_info_list) { + list_for_each_cookie_rcu(apos, head->read_var2, + &domain->acl_info_list) { struct tomoyo_acl_info *ptr = list_entry(apos, struct tomoyo_acl_info, list); @@ -1553,7 +1525,6 @@ acl_loop: if (!done) break; } - up_read(&tomoyo_domain_acl_info_list_lock); if (!done) break; head->read_step = 3; @@ -1565,7 +1536,6 @@ tail_mark: if (head->read_single_domain) break; } - up_read(&tomoyo_domain_list_lock); head->read_eof = done; return 0; } @@ -1581,6 +1551,8 @@ tail_mark: * * ( echo "select " $domainname; echo "use_profile " $profile ) | * /usr/lib/ccs/loadpolicy -d + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static int tomoyo_write_domain_profile(struct tomoyo_io_buffer *head) { @@ -1592,9 +1564,7 @@ static int tomoyo_write_domain_profile(s if (!cp) return -EINVAL; *cp = '\0'; - down_read(&tomoyo_domain_list_lock); domain = tomoyo_find_domain(cp + 1); - up_read(&tomoyo_domain_list_lock); if (strict_strtoul(data, 10, &profile)) return -EINVAL; if (domain && profile < TOMOYO_MAX_PROFILES @@ -1624,8 +1594,7 @@ static int tomoyo_read_domain_profile(st if (head->read_eof) return 0; - down_read(&tomoyo_domain_list_lock); - list_for_each_cookie(pos, head->read_var1, &tomoyo_domain_list) { + list_for_each_cookie_rcu(pos, head->read_var1, &tomoyo_domain_list) { struct tomoyo_domain_info *domain; domain = list_entry(pos, struct tomoyo_domain_info, list); if (domain->is_deleted) @@ -1635,7 +1604,6 @@ static int tomoyo_read_domain_profile(st if (!done) break; } - up_read(&tomoyo_domain_list_lock); head->read_eof = done; return 0; } @@ -1854,16 +1822,24 @@ void tomoyo_load_policy(const char *file printk(KERN_INFO "Mandatory Access Control activated.\n"); tomoyo_policy_loaded = true; { /* Check all profiles currently assigned to domains are defined. 
*/ + const int idx = srcu_read_lock(&tomoyo_ss); struct tomoyo_domain_info *domain; - down_read(&tomoyo_domain_list_lock); - list_for_each_entry(domain, &tomoyo_domain_list, list) { + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { const u8 profile = domain->profile; if (tomoyo_profile_ptr[profile]) continue; panic("Profile %u (used by '%s') not defined.\n", profile, domain->domainname->name); } - up_read(&tomoyo_domain_list_lock); + srcu_read_unlock(&tomoyo_ss, idx); + } + { + struct task_struct *task = + kthread_create(tomoyo_gc_thread, NULL, "GC for TOMOYO"); + if (IS_ERR(task)) + printk(KERN_ERR "GC thread not available.\n"); + else + wake_up_process(task); } } @@ -1997,6 +1973,7 @@ static int tomoyo_open_control(const u8 } } file->private_data = head; + head->tomoyo_srcu_index = srcu_read_lock(&tomoyo_ss); /* * Call the handler now if the file is * /sys/kernel/security/tomoyo/self_domain @@ -2114,6 +2091,7 @@ static int tomoyo_write_control(struct f static int tomoyo_close_control(struct file *file) { struct tomoyo_io_buffer *head = file->private_data; + srcu_read_unlock(&tomoyo_ss, head->tomoyo_srcu_index); /* Release memory used for policy I/O. */ tomoyo_free(head->read_buf); --- security-testing-2.6.git.orig/security/tomoyo/common.h +++ security-testing-2.6.git/security/tomoyo/common.h @@ -156,6 +156,7 @@ struct tomoyo_domain_info { struct list_head acl_info_list; /* Name of this domain. Never NULL. */ const struct tomoyo_path_info *domainname; + atomic_t users; u8 profile; /* Profile number to use. */ bool is_deleted; /* Delete flag. */ bool quota_warned; /* Quota warnning flag. */ @@ -266,6 +267,8 @@ struct tomoyo_io_buffer { int (*write) (struct tomoyo_io_buffer *); /* Exclusive lock for this structure. */ struct mutex io_sem; + /* counter which this structure locked. */ + int tomoyo_srcu_index; /* The position currently reading from. */ struct list_head *read_var1; /* Extra variables for reading. 
*/ @@ -421,10 +424,9 @@ static inline bool tomoyo_is_invalid(con /* The list for "struct tomoyo_domain_info". */ extern struct list_head tomoyo_domain_list; -extern struct rw_semaphore tomoyo_domain_list_lock; -/* Lock for domain->acl_info_list. */ -extern struct rw_semaphore tomoyo_domain_acl_info_list_lock; +/* Lock for modifying policy. */ +extern struct mutex tomoyo_policy_lock; /* Has /sbin/init started? */ extern bool tomoyo_policy_loaded; @@ -433,21 +435,181 @@ extern bool tomoyo_policy_loaded; extern struct tomoyo_domain_info tomoyo_kernel_domain; /** - * list_for_each_cookie - iterate over a list with cookie. + * list_for_each_cookie_rcu - iterate over a list with cookie. * @pos: the &struct list_head to use as a loop cursor. * @cookie: the &struct list_head to use as a cookie. * @head: the head for your list. * - * Same with list_for_each() except that this primitive uses @cookie + * Same with __list_for_each_rcu() except that this primitive uses @cookie * so that we can continue iteration. * @cookie must be NULL when iteration starts, and @cookie will become * NULL when iteration finishes. */ -#define list_for_each_cookie(pos, cookie, head) \ +#define list_for_each_cookie_rcu(pos, cookie, head) \ for (({ if (!cookie) \ - cookie = head; }), \ - pos = (cookie)->next; \ + cookie = head; }), \ + pos = rcu_dereference((cookie)->next); \ prefetch(pos->next), pos != (head) || ((cookie) = NULL); \ - (cookie) = pos, pos = pos->next) + (cookie) = pos, pos = rcu_dereference(pos->next)) + +/* SRCU structure for GC */ +extern struct srcu_struct tomoyo_ss; + +/* + * tomoyo_policy_manager_entry is a structure which is used for holding list of + * domainnames or programs which are permitted to modify configuration via + * /sys/kernel/security/tomoyo/ interface. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_policy_manager_list . + * (2) "manager" is a domainname or a program's pathname. 
+ * (3) "is_domain" is a bool which is true if "manager" is a domainname, false + * otherwise. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_policy_manager_entry { + struct list_head list; + /* A path to program or a domainname. */ + const struct tomoyo_path_info *manager; + bool is_domain; /* True if manager is a domainname. */ + bool is_deleted; /* True if this entry is deleted. */ +}; + +extern struct list_head tomoyo_policy_manager_list; + +/* + * tomoyo_globally_readable_file_entry is a structure which is used for holding + * "allow_read" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_globally_readable_list . + * (2) "filename" is a pathname which is allowed to open(O_RDONLY). + * (3) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_globally_readable_file_entry { + struct list_head list; + const struct tomoyo_path_info *filename; + bool is_deleted; +}; + +extern struct list_head tomoyo_globally_readable_list; + +/* + * tomoyo_pattern_entry is a structure which is used for holding + * "tomoyo_pattern_list" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_pattern_list . + * (2) "pattern" is a pathname pattern which is used for converting pathnames + * to pathname patterns during learning mode. + * (3) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_pattern_entry { + struct list_head list; + const struct tomoyo_path_info *pattern; + bool is_deleted; +}; + +extern struct list_head tomoyo_pattern_list; + +/* + * tomoyo_no_rewrite_entry is a structure which is used for holding + * "deny_rewrite" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_no_rewrite_list . + * (2) "pattern" is a pathname which is by default not permitted to modify + * already existing content. 
+ * (3) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_no_rewrite_entry { + struct list_head list; + const struct tomoyo_path_info *pattern; + bool is_deleted; +}; + +extern struct list_head tomoyo_no_rewrite_list; + +/* + * tomoyo_domain_initializer_entry is a structure which is used for holding + * "initialize_domain" and "no_initialize_domain" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_domain_initializer_list . + * (2) "domainname" which is "a domainname" or "the last component of a + * domainname". This field is NULL if "from" clause is not specified. + * (3) "program" which is a program's pathname. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + * (5) "is_not" is a bool which is true if "no_initialize_domain", false + * otherwise. + * (6) "is_last_name" is a bool which is true if "domainname" is "the last + * component of a domainname", false otherwise. + */ +struct tomoyo_domain_initializer_entry { + struct list_head list; + const struct tomoyo_path_info *domainname; /* This may be NULL */ + const struct tomoyo_path_info *program; + bool is_deleted; + bool is_not; /* True if this entry is "no_initialize_domain". */ + /* True if the domainname is tomoyo_get_last_name(). */ + bool is_last_name; +}; + +extern struct list_head tomoyo_domain_initializer_list; + +/* + * tomoyo_domain_keeper_entry is a structure which is used for holding + * "keep_domain" and "no_keep_domain" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_domain_keeper_list . + * (2) "domainname" which is "a domainname" or "the last component of a + * domainname". + * (3) "program" which is a program's pathname. + * This field is NULL if "from" clause is not specified. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. 
+ * (5) "is_not" is a bool which is true if "no_initialize_domain", false + * otherwise. + * (6) "is_last_name" is a bool which is true if "domainname" is "the last + * component of a domainname", false otherwise. + */ +struct tomoyo_domain_keeper_entry { + struct list_head list; + const struct tomoyo_path_info *domainname; + const struct tomoyo_path_info *program; /* This may be NULL */ + bool is_deleted; + bool is_not; /* True if this entry is "no_keep_domain". */ + /* True if the domainname is tomoyo_get_last_name(). */ + bool is_last_name; +}; + +extern struct list_head tomoyo_domain_keeper_list; + +/* + * tomoyo_alias_entry is a structure which is used for holding "alias" entries. + * It has following fields. + * + * (1) "list" which is linked to tomoyo_alias_list . + * (2) "original_name" which is a dereferenced pathname. + * (3) "aliased_name" which is a symlink's pathname. + * (4) "is_deleted" is a bool which is true if marked as deleted, false + * otherwise. + */ +struct tomoyo_alias_entry { + struct list_head list; + const struct tomoyo_path_info *original_name; + const struct tomoyo_path_info *aliased_name; + bool is_deleted; +}; + +extern struct list_head tomoyo_alias_list; + +int tomoyo_gc_thread(void *unused); #endif /* !defined(_SECURITY_TOMOYO_COMMON_H) */ --- security-testing-2.6.git.orig/security/tomoyo/domain.c +++ security-testing-2.6.git/security/tomoyo/domain.c @@ -58,77 +58,6 @@ struct tomoyo_domain_info tomoyo_kernel_ * exceptions. */ LIST_HEAD(tomoyo_domain_list); -DECLARE_RWSEM(tomoyo_domain_list_lock); - -/* - * tomoyo_domain_initializer_entry is a structure which is used for holding - * "initialize_domain" and "no_initialize_domain" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_domain_initializer_list . - * (2) "domainname" which is "a domainname" or "the last component of a - * domainname". This field is NULL if "from" clause is not specified. - * (3) "program" which is a program's pathname. 
- * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - * (5) "is_not" is a bool which is true if "no_initialize_domain", false - * otherwise. - * (6) "is_last_name" is a bool which is true if "domainname" is "the last - * component of a domainname", false otherwise. - */ -struct tomoyo_domain_initializer_entry { - struct list_head list; - const struct tomoyo_path_info *domainname; /* This may be NULL */ - const struct tomoyo_path_info *program; - bool is_deleted; - bool is_not; /* True if this entry is "no_initialize_domain". */ - /* True if the domainname is tomoyo_get_last_name(). */ - bool is_last_name; -}; - -/* - * tomoyo_domain_keeper_entry is a structure which is used for holding - * "keep_domain" and "no_keep_domain" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_domain_keeper_list . - * (2) "domainname" which is "a domainname" or "the last component of a - * domainname". - * (3) "program" which is a program's pathname. - * This field is NULL if "from" clause is not specified. - * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - * (5) "is_not" is a bool which is true if "no_initialize_domain", false - * otherwise. - * (6) "is_last_name" is a bool which is true if "domainname" is "the last - * component of a domainname", false otherwise. - */ -struct tomoyo_domain_keeper_entry { - struct list_head list; - const struct tomoyo_path_info *domainname; - const struct tomoyo_path_info *program; /* This may be NULL */ - bool is_deleted; - bool is_not; /* True if this entry is "no_keep_domain". */ - /* True if the domainname is tomoyo_get_last_name(). */ - bool is_last_name; -}; - -/* - * tomoyo_alias_entry is a structure which is used for holding "alias" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_alias_list . - * (2) "original_name" which is a dereferenced pathname. - * (3) "aliased_name" which is a symlink's pathname. 
- * (4) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_alias_entry { - struct list_head list; - const struct tomoyo_path_info *original_name; - const struct tomoyo_path_info *aliased_name; - bool is_deleted; -}; /** * tomoyo_get_last_name - Get last component of a domainname. @@ -183,8 +112,7 @@ const char *tomoyo_get_last_name(const s * will cause "/usr/sbin/httpd" to belong to "<kernel> /usr/sbin/httpd" domain * unless executed from "<kernel> /etc/rc.d/init.d/httpd" domain. */ -static LIST_HEAD(tomoyo_domain_initializer_list); -static DECLARE_RWSEM(tomoyo_domain_initializer_list_lock); +LIST_HEAD(tomoyo_domain_initializer_list); /** * tomoyo_update_domain_initializer_entry - Update "struct tomoyo_domain_initializer_entry" list. @@ -227,8 +155,8 @@ static int tomoyo_update_domain_initiali } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_initializer_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, list) { if (ptr->is_not != is_not || ptr->domainname != saved_domainname || ptr->program != saved_program) @@ -244,12 +172,12 @@ static int tomoyo_update_domain_initiali saved_program = NULL; new_entry->is_not = is_not; new_entry->is_last_name = is_last_name; - list_add_tail(&new_entry->list, - &tomoyo_domain_initializer_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_domain_initializer_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_domain_initializer_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_domainname); tomoyo_put_name(saved_program); kfree(new_entry); @@ -268,15 +196,14 @@ bool tomoyo_read_domain_initializer_poli struct list_head *pos; bool done = true; - down_read(&tomoyo_domain_initializer_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_domain_initializer_list) { + 
list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_domain_initializer_list) { const char *no; const char *from = ""; const char *domain = ""; struct tomoyo_domain_initializer_entry *ptr; ptr = list_entry(pos, struct tomoyo_domain_initializer_entry, - list); + list); if (ptr->is_deleted) continue; no = ptr->is_not ? "no_" : ""; @@ -291,7 +218,6 @@ bool tomoyo_read_domain_initializer_poli if (!done) break; } - up_read(&tomoyo_domain_initializer_list_lock); return done; } @@ -328,6 +254,8 @@ int tomoyo_write_domain_initializer_poli * * Returns true if executing @program reinitializes domain transition, * false otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). */ static bool tomoyo_is_domain_initializer(const struct tomoyo_path_info * domainname, @@ -338,8 +266,7 @@ static bool tomoyo_is_domain_initializer struct tomoyo_domain_initializer_entry *ptr; bool flag = false; - down_read(&tomoyo_domain_initializer_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, list) { if (ptr->is_deleted) continue; if (ptr->domainname) { @@ -359,7 +286,6 @@ static bool tomoyo_is_domain_initializer } flag = true; } - up_read(&tomoyo_domain_initializer_list_lock); return flag; } @@ -401,8 +327,7 @@ static bool tomoyo_is_domain_initializer * "<kernel> /usr/sbin/sshd /bin/bash /usr/bin/passwd" domain, unless * explicitly specified by "initialize_domain". */ -static LIST_HEAD(tomoyo_domain_keeper_list); -static DECLARE_RWSEM(tomoyo_domain_keeper_list_lock); +LIST_HEAD(tomoyo_domain_keeper_list); /** * tomoyo_update_domain_keeper_entry - Update "struct tomoyo_domain_keeper_entry" list. 
@@ -445,8 +370,8 @@ static int tomoyo_update_domain_keeper_e } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_keeper_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, list) { if (ptr->is_not != is_not || ptr->domainname != saved_domainname || ptr->program != saved_program) @@ -462,11 +387,12 @@ static int tomoyo_update_domain_keeper_e saved_program = NULL; new_entry->is_not = is_not; new_entry->is_last_name = is_last_name; - list_add_tail(&new_entry->list, &tomoyo_domain_keeper_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_domain_keeper_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_domain_keeper_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_domainname); tomoyo_put_name(saved_program); kfree(new_entry); @@ -506,9 +432,8 @@ bool tomoyo_read_domain_keeper_policy(st struct list_head *pos; bool done = true; - down_read(&tomoyo_domain_keeper_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_domain_keeper_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_domain_keeper_list) { struct tomoyo_domain_keeper_entry *ptr; const char *no; const char *from = ""; @@ -529,7 +454,6 @@ bool tomoyo_read_domain_keeper_policy(st if (!done) break; } - up_read(&tomoyo_domain_keeper_list_lock); return done; } @@ -542,6 +466,8 @@ bool tomoyo_read_domain_keeper_policy(st * * Returns true if executing @program supresses domain transition, * false otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static bool tomoyo_is_domain_keeper(const struct tomoyo_path_info *domainname, const struct tomoyo_path_info *program, @@ -550,8 +476,7 @@ static bool tomoyo_is_domain_keeper(cons struct tomoyo_domain_keeper_entry *ptr; bool flag = false; - down_read(&tomoyo_domain_keeper_list_lock); - list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, list) { if (ptr->is_deleted) continue; if (!ptr->is_last_name) { @@ -569,7 +494,6 @@ static bool tomoyo_is_domain_keeper(cons } flag = true; } - up_read(&tomoyo_domain_keeper_list_lock); return flag; } @@ -603,8 +527,7 @@ static bool tomoyo_is_domain_keeper(cons * /bin/busybox and domainname which the current process will belong to after * execve() succeeds is calculated using /bin/cat rather than /bin/busybox . */ -static LIST_HEAD(tomoyo_alias_list); -static DECLARE_RWSEM(tomoyo_alias_list_lock); +LIST_HEAD(tomoyo_alias_list); /** * tomoyo_update_alias_entry - Update "struct tomoyo_alias_entry" list. 
@@ -637,8 +560,8 @@ static int tomoyo_update_alias_entry(con } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_alias_list_lock); - list_for_each_entry(ptr, &tomoyo_alias_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { if (ptr->original_name != saved_original_name || ptr->aliased_name != saved_aliased_name) continue; @@ -651,11 +574,11 @@ static int tomoyo_update_alias_entry(con saved_original_name = NULL; new_entry->aliased_name = saved_aliased_name; saved_aliased_name = NULL; - list_add_tail(&new_entry->list, &tomoyo_alias_list); + list_add_tail_rcu(&new_entry->list, &tomoyo_alias_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_alias_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_original_name); tomoyo_put_name(saved_aliased_name); kfree(new_entry); @@ -674,8 +597,7 @@ bool tomoyo_read_alias_policy(struct tom struct list_head *pos; bool done = true; - down_read(&tomoyo_alias_list_lock); - list_for_each_cookie(pos, head->read_var2, &tomoyo_alias_list) { + list_for_each_cookie_rcu(pos, head->read_var2, &tomoyo_alias_list) { struct tomoyo_alias_entry *ptr; ptr = list_entry(pos, struct tomoyo_alias_entry, list); @@ -687,7 +609,6 @@ bool tomoyo_read_alias_policy(struct tom if (!done) break; } - up_read(&tomoyo_alias_list_lock); return done; } @@ -731,52 +652,18 @@ struct tomoyo_domain_info *tomoyo_find_o if (!saved_domainname) return NULL; new_domain = kmalloc(sizeof(*new_domain), GFP_KERNEL); - down_write(&tomoyo_domain_list_lock); + mutex_lock(&tomoyo_policy_lock); domain = tomoyo_find_domain(domainname); - if (domain) - goto out; - /* Can I reuse memory of deleted domain? 
*/ - list_for_each_entry(domain, &tomoyo_domain_list, list) { - struct task_struct *p; - struct tomoyo_acl_info *ptr; - bool flag; - if (!domain->is_deleted || - domain->domainname != saved_domainname) - continue; - flag = false; - read_lock(&tasklist_lock); - for_each_process(p) { - if (tomoyo_real_domain(p) != domain) - continue; - flag = true; - break; - } - read_unlock(&tasklist_lock); - if (flag) - continue; - list_for_each_entry(ptr, &domain->acl_info_list, list) { - ptr->type |= TOMOYO_ACL_DELETED; - } - domain->ignore_global_allow_read = false; - domain->domain_transition_failed = false; - domain->profile = profile; - domain->quota_warned = false; - mb(); /* Avoid out-of-order execution. */ - domain->is_deleted = false; - goto out; - } - /* No memory reusable. Create using new memory. */ - if (tomoyo_memory_ok(new_domain)) { + if (!domain && tomoyo_memory_ok(new_domain)) { domain = new_domain; new_domain = NULL; INIT_LIST_HEAD(&domain->acl_info_list); domain->domainname = saved_domainname; saved_domainname = NULL; domain->profile = profile; - list_add_tail(&domain->list, &tomoyo_domain_list); + list_add_tail_rcu(&domain->list, &tomoyo_domain_list); } - out: - up_write(&tomoyo_domain_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_domainname); kfree(new_domain); return domain; @@ -788,6 +675,8 @@ struct tomoyo_domain_info *tomoyo_find_o * @bprm: Pointer to "struct linux_binprm". * * Returns 0 on success, negative value otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ int tomoyo_find_next_domain(struct linux_binprm *bprm) { @@ -810,6 +699,7 @@ int tomoyo_find_next_domain(struct linux struct tomoyo_path_info s; /* symlink name */ struct tomoyo_path_info l; /* last name */ static bool initialized; + const int idx = srcu_read_lock(&tomoyo_ss); if (!tmp) goto out; @@ -848,8 +738,7 @@ int tomoyo_find_next_domain(struct linux if (tomoyo_pathcmp(&r, &s)) { struct tomoyo_alias_entry *ptr; /* Is this program allowed to be called via symbolic links? */ - down_read(&tomoyo_alias_list_lock); - list_for_each_entry(ptr, &tomoyo_alias_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { if (ptr->is_deleted || tomoyo_pathcmp(&r, ptr->original_name) || tomoyo_pathcmp(&s, ptr->aliased_name)) @@ -860,7 +749,6 @@ int tomoyo_find_next_domain(struct linux tomoyo_fill_path_info(&r); break; } - up_read(&tomoyo_alias_list_lock); } /* Check execute permission. */ @@ -891,9 +779,7 @@ int tomoyo_find_next_domain(struct linux } if (domain || strlen(new_domain_name) >= TOMOYO_MAX_PATHNAME_LEN) goto done; - down_read(&tomoyo_domain_list_lock); domain = tomoyo_find_domain(new_domain_name); - up_read(&tomoyo_domain_list_lock); if (domain) goto done; if (is_enforce) @@ -910,9 +796,12 @@ int tomoyo_find_next_domain(struct linux else old_domain->domain_transition_failed = true; out: + BUG_ON(bprm->cred->security); if (!domain) domain = old_domain; + atomic_inc(&domain->users); bprm->cred->security = domain; + srcu_read_unlock(&tomoyo_ss, idx); tomoyo_free(real_program_name); tomoyo_free(symlink_program_name); tomoyo_free(tmp); --- security-testing-2.6.git.orig/security/tomoyo/file.c +++ security-testing-2.6.git/security/tomoyo/file.c @@ -14,56 +14,6 @@ #include "realpath.h" #define ACC_MODE(x) ("\000\004\002\006"[(x)&O_ACCMODE]) -/* - * tomoyo_globally_readable_file_entry is a structure which is used for holding - * "allow_read" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_globally_readable_list . 
- * (2) "filename" is a pathname which is allowed to open(O_RDONLY). - * (3) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_globally_readable_file_entry { - struct list_head list; - const struct tomoyo_path_info *filename; - bool is_deleted; -}; - -/* - * tomoyo_pattern_entry is a structure which is used for holding - * "tomoyo_pattern_list" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_pattern_list . - * (2) "pattern" is a pathname pattern which is used for converting pathnames - * to pathname patterns during learning mode. - * (3) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_pattern_entry { - struct list_head list; - const struct tomoyo_path_info *pattern; - bool is_deleted; -}; - -/* - * tomoyo_no_rewrite_entry is a structure which is used for holding - * "deny_rewrite" entries. - * It has following fields. - * - * (1) "list" which is linked to tomoyo_no_rewrite_list . - * (2) "pattern" is a pathname which is by default not permitted to modify - * already existing content. - * (3) "is_deleted" is a bool which is true if marked as deleted, false - * otherwise. - */ -struct tomoyo_no_rewrite_entry { - struct list_head list; - const struct tomoyo_path_info *pattern; - bool is_deleted; -}; - /* Keyword array for single path operations. */ static const char *tomoyo_sp_keyword[TOMOYO_MAX_SINGLE_PATH_OPERATION] = { [TOMOYO_TYPE_READ_WRITE_ACL] = "read/write", @@ -159,8 +109,8 @@ static struct tomoyo_path_info *tomoyo_g return NULL; } -/* Lock for domain->acl_info_list. */ -DECLARE_RWSEM(tomoyo_domain_acl_info_list_lock); +/* Lock for modifying TOMOYO's policy. 
*/ +DEFINE_MUTEX(tomoyo_policy_lock); static int tomoyo_update_double_path_acl(const u8 type, const char *filename1, const char *filename2, @@ -195,8 +145,7 @@ static int tomoyo_update_single_path_acl * given "allow_read /lib/libc-2.5.so" to the domain which current process * belongs to. */ -static LIST_HEAD(tomoyo_globally_readable_list); -static DECLARE_RWSEM(tomoyo_globally_readable_list_lock); +LIST_HEAD(tomoyo_globally_readable_list); /** * tomoyo_update_globally_readable_entry - Update "struct tomoyo_globally_readable_file_entry" list. @@ -221,8 +170,8 @@ static int tomoyo_update_globally_readab return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_globally_readable_list_lock); - list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, list) { if (ptr->filename != saved_filename) continue; ptr->is_deleted = is_delete; @@ -232,11 +181,12 @@ static int tomoyo_update_globally_readab if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->filename = saved_filename; saved_filename = NULL; - list_add_tail(&new_entry->list, &tomoyo_globally_readable_list); + list_add_tail_rcu(&new_entry->list, + &tomoyo_globally_readable_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_globally_readable_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_filename); kfree(new_entry); return error; @@ -248,21 +198,21 @@ static int tomoyo_update_globally_readab * @filename: The filename to check. * * Returns true if any domain can open @filename for reading, false otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static bool tomoyo_is_globally_readable_file(const struct tomoyo_path_info * filename) { struct tomoyo_globally_readable_file_entry *ptr; bool found = false; - down_read(&tomoyo_globally_readable_list_lock); - list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, list) { if (!ptr->is_deleted && tomoyo_path_matches_pattern(filename, ptr->filename)) { found = true; break; } } - up_read(&tomoyo_globally_readable_list_lock); return found; } @@ -291,9 +241,8 @@ bool tomoyo_read_globally_readable_polic struct list_head *pos; bool done = true; - down_read(&tomoyo_globally_readable_list_lock); - list_for_each_cookie(pos, head->read_var2, - &tomoyo_globally_readable_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_globally_readable_list) { struct tomoyo_globally_readable_file_entry *ptr; ptr = list_entry(pos, struct tomoyo_globally_readable_file_entry, @@ -305,7 +254,6 @@ bool tomoyo_read_globally_readable_polic if (!done) break; } - up_read(&tomoyo_globally_readable_list_lock); return done; } @@ -338,8 +286,7 @@ bool tomoyo_read_globally_readable_polic * which pretends as if /proc/self/ is not a symlink; so that we can forbid * current process from accessing other process's information. */ -static LIST_HEAD(tomoyo_pattern_list); -static DECLARE_RWSEM(tomoyo_pattern_list_lock); +LIST_HEAD(tomoyo_pattern_list); /** * tomoyo_update_file_pattern_entry - Update "struct tomoyo_pattern_entry" list. 
@@ -364,8 +311,8 @@ static int tomoyo_update_file_pattern_en return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_pattern_list_lock); - list_for_each_entry(ptr, &tomoyo_pattern_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { if (saved_pattern != ptr->pattern) continue; ptr->is_deleted = is_delete; @@ -375,11 +322,11 @@ static int tomoyo_update_file_pattern_en if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->pattern = saved_pattern; saved_pattern = NULL; - list_add_tail(&new_entry->list, &tomoyo_pattern_list); + list_add_tail_rcu(&new_entry->list, &tomoyo_pattern_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_pattern_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_pattern); kfree(new_entry); return error; @@ -391,6 +338,8 @@ static int tomoyo_update_file_pattern_en * @filename: The filename to find patterned pathname. * * Returns pointer to pathname pattern if matched, @filename otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static const struct tomoyo_path_info * tomoyo_get_file_pattern(const struct tomoyo_path_info *filename) @@ -398,8 +347,7 @@ tomoyo_get_file_pattern(const struct tom struct tomoyo_pattern_entry *ptr; const struct tomoyo_path_info *pattern = NULL; - down_read(&tomoyo_pattern_list_lock); - list_for_each_entry(ptr, &tomoyo_pattern_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { if (ptr->is_deleted) continue; if (!tomoyo_path_matches_pattern(filename, ptr->pattern)) @@ -412,7 +360,6 @@ tomoyo_get_file_pattern(const struct tom break; } } - up_read(&tomoyo_pattern_list_lock); if (pattern) filename = pattern; return filename; @@ -443,8 +390,7 @@ bool tomoyo_read_file_pattern(struct tom struct list_head *pos; bool done = true; - down_read(&tomoyo_pattern_list_lock); - list_for_each_cookie(pos, head->read_var2, &tomoyo_pattern_list) { + list_for_each_cookie_rcu(pos, head->read_var2, &tomoyo_pattern_list) { struct tomoyo_pattern_entry *ptr; ptr = list_entry(pos, struct tomoyo_pattern_entry, list); if (ptr->is_deleted) @@ -454,7 +400,6 @@ bool tomoyo_read_file_pattern(struct tom if (!done) break; } - up_read(&tomoyo_pattern_list_lock); return done; } @@ -487,8 +432,7 @@ bool tomoyo_read_file_pattern(struct tom * " (deleted)" suffix if the file is already unlink()ed; so that we don't * need to worry whether the file is already unlink()ed or not. */ -static LIST_HEAD(tomoyo_no_rewrite_list); -static DECLARE_RWSEM(tomoyo_no_rewrite_list_lock); +LIST_HEAD(tomoyo_no_rewrite_list); /** * tomoyo_update_no_rewrite_entry - Update "struct tomoyo_no_rewrite_entry" list. 
@@ -513,8 +457,8 @@ static int tomoyo_update_no_rewrite_entr return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_no_rewrite_list_lock); - list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { + mutex_lock(&tomoyo_policy_lock); + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { if (ptr->pattern != saved_pattern) continue; ptr->is_deleted = is_delete; @@ -524,11 +468,11 @@ static int tomoyo_update_no_rewrite_entr if (!is_delete && error && tomoyo_memory_ok(new_entry)) { new_entry->pattern = saved_pattern; saved_pattern = NULL; - list_add_tail(&new_entry->list, &tomoyo_no_rewrite_list); + list_add_tail_rcu(&new_entry->list, &tomoyo_no_rewrite_list); new_entry = NULL; error = 0; } - up_write(&tomoyo_no_rewrite_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_pattern); return error; } @@ -540,14 +484,15 @@ static int tomoyo_update_no_rewrite_entr * * Returns true if @filename is specified by "deny_rewrite" directive, * false otherwise. + * + * Caller holds srcu_read_lock(&tomoyo_ss). 
*/ static bool tomoyo_is_no_rewrite_file(const struct tomoyo_path_info *filename) { struct tomoyo_no_rewrite_entry *ptr; bool found = false; - down_read(&tomoyo_no_rewrite_list_lock); - list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { if (ptr->is_deleted) continue; if (!tomoyo_path_matches_pattern(filename, ptr->pattern)) @@ -555,7 +500,6 @@ static bool tomoyo_is_no_rewrite_file(co found = true; break; } - up_read(&tomoyo_no_rewrite_list_lock); return found; } @@ -584,8 +528,8 @@ bool tomoyo_read_no_rewrite_policy(struc struct list_head *pos; bool done = true; - down_read(&tomoyo_no_rewrite_list_lock); - list_for_each_cookie(pos, head->read_var2, &tomoyo_no_rewrite_list) { + list_for_each_cookie_rcu(pos, head->read_var2, + &tomoyo_no_rewrite_list) { struct tomoyo_no_rewrite_entry *ptr; ptr = list_entry(pos, struct tomoyo_no_rewrite_entry, list); if (ptr->is_deleted) @@ -595,7 +539,6 @@ bool tomoyo_read_no_rewrite_policy(struc if (!done) break; } - up_read(&tomoyo_no_rewrite_list_lock); return done; } @@ -660,9 +603,9 @@ static int tomoyo_check_single_path_acl2 { struct tomoyo_acl_info *ptr; int error = -EPERM; + const int idx = srcu_read_lock(&tomoyo_ss); - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; @@ -680,7 +623,7 @@ static int tomoyo_check_single_path_acl2 error = 0; break; } - up_read(&tomoyo_domain_acl_info_list_lock); + srcu_read_unlock(&tomoyo_ss, idx); return error; } @@ -846,10 +789,10 @@ static int tomoyo_update_single_path_acl return -ENOMEM; if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_acl_info_list_lock); + mutex_lock(&tomoyo_policy_lock); if (is_delete) goto delete; - list_for_each_entry(ptr, 
&domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; @@ -877,13 +820,14 @@ static int tomoyo_update_single_path_acl new_entry->perm |= rw_mask; new_entry->filename = saved_filename; saved_filename = NULL; - list_add_tail(&new_entry->head.list, &domain->acl_info_list); + list_add_tail_rcu(&new_entry->head.list, + &domain->acl_info_list); new_entry = NULL; error = 0; } goto out; delete: - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_single_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) continue; @@ -902,7 +846,7 @@ static int tomoyo_update_single_path_acl break; } out: - up_write(&tomoyo_domain_acl_info_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_filename); kfree(new_entry); return error; @@ -945,10 +889,10 @@ static int tomoyo_update_double_path_acl } if (!is_delete) new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); - down_write(&tomoyo_domain_acl_info_list_lock); + mutex_lock(&tomoyo_policy_lock); if (is_delete) goto delete; - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; @@ -973,13 +917,14 @@ static int tomoyo_update_double_path_acl saved_filename1 = NULL; new_entry->filename2 = saved_filename2; saved_filename2 = NULL; - list_add_tail(&new_entry->head.list, &domain->acl_info_list); + list_add_tail_rcu(&new_entry->head.list, + &domain->acl_info_list); new_entry = NULL; error = 0; } goto out; delete: - list_for_each_entry(ptr, &domain->acl_info_list, list) { + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_double_path_acl_record *acl; if 
(tomoyo_acl_type2(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; @@ -995,7 +940,7 @@ static int tomoyo_update_double_path_acl break; } out: - up_write(&tomoyo_domain_acl_info_list_lock); + mutex_unlock(&tomoyo_policy_lock); tomoyo_put_name(saved_filename1); tomoyo_put_name(saved_filename2); kfree(new_entry); @@ -1040,11 +985,12 @@ static int tomoyo_check_double_path_acl( struct tomoyo_acl_info *ptr; const u8 perm = 1 << type; int error = -EPERM; + int idx; if (!tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE)) return 0; - down_read(&tomoyo_domain_acl_info_list_lock); - list_for_each_entry(ptr, &domain->acl_info_list, list) { + idx = srcu_read_lock(&tomoyo_ss); + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { struct tomoyo_double_path_acl_record *acl; if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) continue; @@ -1059,7 +1005,7 @@ static int tomoyo_check_double_path_acl( error = 0; break; } - up_read(&tomoyo_domain_acl_info_list_lock); + srcu_read_unlock(&tomoyo_ss, idx); return error; } @@ -1169,6 +1115,7 @@ int tomoyo_check_open_permission(struct struct tomoyo_path_info *buf; const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); + int idx; if (!mode || !path->mnt) return 0; @@ -1184,6 +1131,7 @@ int tomoyo_check_open_permission(struct if (!buf) goto out; error = 0; + idx = srcu_read_lock(&tomoyo_ss); /* * If the filename is specified by "deny_rewrite" keyword, * we need to check "allow_rewrite" permission when the filename is not @@ -1203,6 +1151,7 @@ int tomoyo_check_open_permission(struct error = tomoyo_check_single_path_permission2(domain, TOMOYO_TYPE_TRUNCATE_ACL, buf, mode); + srcu_read_unlock(&tomoyo_ss, idx); out: tomoyo_free(buf); if (!is_enforce) @@ -1226,6 +1175,7 @@ int tomoyo_check_1path_perm(struct tomoy struct tomoyo_path_info *buf; const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); + int idx; if (!mode || !path->mnt) return 0; @@ 
-1243,8 +1193,10 @@ int tomoyo_check_1path_perm(struct tomoy tomoyo_fill_path_info(buf); } } + idx = srcu_read_lock(&tomoyo_ss); error = tomoyo_check_single_path_permission2(domain, operation, buf, mode); + srcu_read_unlock(&tomoyo_ss, idx); out: tomoyo_free(buf); if (!is_enforce) @@ -1267,19 +1219,23 @@ int tomoyo_check_rewrite_permission(stru const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); struct tomoyo_path_info *buf; + int idx; if (!mode || !filp->f_path.mnt) return 0; buf = tomoyo_get_path(&filp->f_path); if (!buf) goto out; + idx = srcu_read_lock(&tomoyo_ss); if (!tomoyo_is_no_rewrite_file(buf)) { error = 0; - goto out; + goto ok; } error = tomoyo_check_single_path_permission2(domain, TOMOYO_TYPE_REWRITE_ACL, buf, mode); + ok: + srcu_read_unlock(&tomoyo_ss, idx); out: tomoyo_free(buf); if (!is_enforce) @@ -1306,6 +1262,7 @@ int tomoyo_check_2path_perm(struct tomoy const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); const bool is_enforce = (mode == 3); const char *msg; + int idx; if (!mode || !path1->mnt || !path2->mnt) return 0; @@ -1329,10 +1286,11 @@ int tomoyo_check_2path_perm(struct tomoy } } } + idx = srcu_read_lock(&tomoyo_ss); error = tomoyo_check_double_path_acl(domain, operation, buf1, buf2); msg = tomoyo_dp2keyword(operation); if (!error) - goto out; + goto ok; if (tomoyo_verbose_mode(domain)) printk(KERN_WARNING "TOMOYO-%s: Access '%s %s %s' " "denied for %s\n", tomoyo_get_msg(is_enforce), @@ -1344,6 +1302,8 @@ int tomoyo_check_2path_perm(struct tomoy tomoyo_update_double_path_acl(operation, name1, name2, domain, false); } + ok: + srcu_read_unlock(&tomoyo_ss, idx); out: tomoyo_free(buf1); tomoyo_free(buf2); --- security-testing-2.6.git.orig/security/tomoyo/realpath.c +++ security-testing-2.6.git/security/tomoyo/realpath.c @@ -1,3 +1,4 @@ + /* * security/tomoyo/realpath.c * @@ -15,6 +16,9 @@ #include <linux/fs_struct.h> #include "common.h" #include "realpath.h" +#include "tomoyo.h" + 
+struct srcu_struct tomoyo_ss; /** * tomoyo_encode: Convert binary string to ascii string. @@ -223,6 +227,17 @@ bool tomoyo_memory_ok(void *ptr) return false; } +/** + * tomoyo_free_element - Free memory for elements. + * + * @ptr: Pointer to allocated memory. + */ +static void tomoyo_free_element(void *ptr) +{ + atomic_sub(ksize(ptr), &tomoyo_allocated_memory_for_elements); + kfree(ptr); +} + /* Memory allocated for string data in bytes. */ static atomic_t tomoyo_allocated_memory_for_savename; /* Quota for holding string data in bytes. */ @@ -238,15 +253,10 @@ static unsigned int tomoyo_quota_for_sav /* * tomoyo_name_entry is a structure which is used for linking * "struct tomoyo_path_info" into tomoyo_name_list . - * - * Since tomoyo_name_list manages a list of strings which are shared by - * multiple processes (whereas "struct tomoyo_path_info" inside - * "struct tomoyo_path_info_with_data" is not shared), a reference counter will - * be added to "struct tomoyo_name_entry" rather than "struct tomoyo_path_info" - * when TOMOYO starts supporting garbage collector. */ struct tomoyo_name_entry { struct list_head list; + atomic_t users; struct tomoyo_path_info entry; }; @@ -287,10 +297,11 @@ const struct tomoyo_path_info *tomoyo_ge entry = kmalloc(sizeof(*entry) + len, GFP_KERNEL); allocated_len = entry ? 
ksize(entry) : 0; mutex_lock(&tomoyo_name_list_lock); - list_for_each_entry(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], - list) { + list_for_each_entry_rcu(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], + list) { if (hash != ptr->entry.hash || strcmp(name, ptr->entry.name)) continue; + atomic_inc(&ptr->users); error = 0; break; } @@ -305,8 +316,9 @@ const struct tomoyo_path_info *tomoyo_ge ptr->entry.name = ((char *) ptr) + sizeof(*ptr); memmove((char *) ptr->entry.name, name, len); tomoyo_fill_path_info(&ptr->entry); - list_add_tail(&ptr->list, - &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); + atomic_set(&ptr->users, 1); + list_add_tail_rcu(&ptr->list, + &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); entry = NULL; error = 0; } @@ -321,6 +333,31 @@ const struct tomoyo_path_info *tomoyo_ge } /** + * tomoyo_put_name - Delete shared memory for string data. + * + * @ptr: Pointer to "struct tomoyo_path_info". + */ +void tomoyo_put_name(const struct tomoyo_path_info *name) +{ + struct tomoyo_name_entry *ptr; + bool can_delete = false; + + if (!name) + return; + ptr = container_of(name, struct tomoyo_name_entry, entry); + mutex_lock(&tomoyo_name_list_lock); + if (atomic_dec_and_test(&ptr->users)) { + list_del(&ptr->list); + can_delete = true; + } + mutex_unlock(&tomoyo_name_list_lock); + if (can_delete) { + atomic_sub(ksize(ptr), &tomoyo_allocated_memory_for_savename); + kfree(ptr); + } +} + +/** * tomoyo_realpath_init - Initialize realpath related code. 
*/ void __init tomoyo_realpath_init(void) @@ -331,12 +368,14 @@ void __init tomoyo_realpath_init(void) for (i = 0; i < TOMOYO_MAX_HASH; i++) INIT_LIST_HEAD(&tomoyo_name_list[i]); INIT_LIST_HEAD(&tomoyo_kernel_domain.acl_info_list); + if (init_srcu_struct(&tomoyo_ss)) + panic("Can't initialize tomoyo_ss"); tomoyo_kernel_domain.domainname = tomoyo_get_name(TOMOYO_ROOT_NAME); - list_add_tail(&tomoyo_kernel_domain.list, &tomoyo_domain_list); - down_read(&tomoyo_domain_list_lock); + list_add_tail_rcu(&tomoyo_kernel_domain.list, &tomoyo_domain_list); + i = srcu_read_lock(&tomoyo_ss); if (tomoyo_find_domain(TOMOYO_ROOT_NAME) != &tomoyo_kernel_domain) panic("Can't register tomoyo_kernel_domain"); - up_read(&tomoyo_domain_list_lock); + srcu_read_unlock(&tomoyo_ss, i); } /* Memory allocated for temporary purpose. */ @@ -431,3 +470,311 @@ int tomoyo_write_memory_quota(struct tom tomoyo_quota_for_elements = size; return 0; } + +/* Garbage collecter functions */ + +static inline void tomoyo_gc_del_domain_initializer +(struct tomoyo_domain_initializer_entry *ptr) +{ + tomoyo_put_name(ptr->domainname); + tomoyo_put_name(ptr->program); +} + +static inline void tomoyo_gc_del_domain_keeper +(struct tomoyo_domain_keeper_entry *ptr) +{ + tomoyo_put_name(ptr->domainname); + tomoyo_put_name(ptr->program); +} + +static inline void tomoyo_gc_del_alias(struct tomoyo_alias_entry *ptr) +{ + tomoyo_put_name(ptr->original_name); + tomoyo_put_name(ptr->aliased_name); +} + +static inline void tomoyo_gc_del_readable +(struct tomoyo_globally_readable_file_entry *ptr) +{ + tomoyo_put_name(ptr->filename); +} + +static inline void tomoyo_gc_del_pattern(struct tomoyo_pattern_entry *ptr) +{ + tomoyo_put_name(ptr->pattern); +} + +static inline void tomoyo_gc_del_no_rewrite +(struct tomoyo_no_rewrite_entry *ptr) +{ + tomoyo_put_name(ptr->pattern); +} + +static inline void tomoyo_gc_del_manager +(struct tomoyo_policy_manager_entry *ptr) +{ + tomoyo_put_name(ptr->manager); +} + +static void 
tomoyo_gc_del_acl(struct tomoyo_acl_info *acl) +{ + switch (tomoyo_acl_type1(acl)) { + struct tomoyo_single_path_acl_record *acl1; + struct tomoyo_double_path_acl_record *acl2; + case TOMOYO_TYPE_SINGLE_PATH_ACL: + acl1 = container_of(acl, struct tomoyo_single_path_acl_record, + head); + tomoyo_put_name(acl1->filename); + break; + case TOMOYO_TYPE_DOUBLE_PATH_ACL: + acl2 = container_of(acl, struct tomoyo_double_path_acl_record, + head); + tomoyo_put_name(acl2->filename1); + tomoyo_put_name(acl2->filename2); + break; + } +} + +static bool tomoyo_gc_del_domain(struct tomoyo_domain_info *domain) +{ + struct tomoyo_acl_info *acl; + struct tomoyo_acl_info *tmp; + /* + * We need to recheck domain->users because + * tomoyo_find_next_domain() increments it. + */ + if (atomic_read(&domain->users)) + return false; + /* Delete all entries in this domain. */ + list_for_each_entry_safe(acl, tmp, &domain->acl_info_list, list) { + list_del_rcu(&acl->list); + tomoyo_gc_del_acl(acl); + tomoyo_free_element(acl); + } + tomoyo_put_name(domain->domainname); + return true; +} + +enum tomoyo_gc_id { + TOMOYO_ID_DOMAIN_INITIALIZER, + TOMOYO_ID_DOMAIN_KEEPER, + TOMOYO_ID_ALIAS, + TOMOYO_ID_GLOBALLY_READABLE, + TOMOYO_ID_PATTERN, + TOMOYO_ID_NO_REWRITE, + TOMOYO_ID_MANAGER, + TOMOYO_ID_ACL, + TOMOYO_ID_DOMAIN +}; + +struct tomoyo_gc_entry { + struct list_head list; + int type; + void *element; +}; + + +/* Caller holds tomoyo_policy_lock mutex. */ +static bool tomoyo_add_to_gc(const int type, void *element, + struct list_head *head) +{ + struct tomoyo_gc_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC); + if (!entry) + return false; + entry->type = type; + entry->element = element; + list_add(&entry->list, head); + return true; +} + +/** + * tomoyo_gc_thread_main - Garbage collector thread for TOMOYO. + * + * @unused: Not used. + * + * This function is exclusively executed. 
+ */ +static int tomoyo_gc_thread_main(void *unused) +{ + static DEFINE_MUTEX(tomoyo_gc_mutex); + static LIST_HEAD(tomoyo_gc_queue); + if (!mutex_trylock(&tomoyo_gc_mutex)) + return 0; + + mutex_lock(&tomoyo_policy_lock); + { + struct tomoyo_globally_readable_file_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, + list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_GLOBALLY_READABLE, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_pattern_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_PATTERN, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_no_rewrite_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_NO_REWRITE, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_domain_initializer_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, + list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN_INITIALIZER, + ptr, &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_domain_keeper_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, + list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN_KEEPER, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_alias_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { + if (!ptr->is_deleted) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_ALIAS, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_policy_manager_entry *ptr; + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, + list) { + if (!ptr->is_deleted) 
+ continue; + if (tomoyo_add_to_gc(TOMOYO_ID_MANAGER, ptr, + &tomoyo_gc_queue)) + list_del_rcu(&ptr->list); + else + break; + } + } + { + struct tomoyo_domain_info *domain; + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { + struct tomoyo_acl_info *acl; + list_for_each_entry_rcu(acl, &domain->acl_info_list, + list) { + if (!(acl->type & TOMOYO_ACL_DELETED)) + continue; + if (tomoyo_add_to_gc(TOMOYO_ID_ACL, acl, + &tomoyo_gc_queue)) + list_del_rcu(&acl->list); + else + break; + } + if (domain->is_deleted && + !atomic_read(&domain->users)) { + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN, domain, + &tomoyo_gc_queue)) + list_del_rcu(&domain->list); + else + break; + } + } + } + mutex_unlock(&tomoyo_policy_lock); + if (list_empty(&tomoyo_gc_queue)) + goto done; + synchronize_srcu(&tomoyo_ss); + { + struct tomoyo_gc_entry *p; + struct tomoyo_gc_entry *tmp; + list_for_each_entry_safe(p, tmp, &tomoyo_gc_queue, list) { + switch (p->type) { + case TOMOYO_ID_DOMAIN_INITIALIZER: + tomoyo_gc_del_domain_initializer(p->element); + break; + case TOMOYO_ID_DOMAIN_KEEPER: + tomoyo_gc_del_domain_keeper(p->element); + break; + case TOMOYO_ID_ALIAS: + tomoyo_gc_del_alias(p->element); + break; + case TOMOYO_ID_GLOBALLY_READABLE: + tomoyo_gc_del_readable(p->element); + break; + case TOMOYO_ID_PATTERN: + tomoyo_gc_del_pattern(p->element); + break; + case TOMOYO_ID_NO_REWRITE: + tomoyo_gc_del_no_rewrite(p->element); + break; + case TOMOYO_ID_MANAGER: + tomoyo_gc_del_manager(p->element); + break; + case TOMOYO_ID_ACL: + tomoyo_gc_del_acl(p->element); + break; + case TOMOYO_ID_DOMAIN: + if (!tomoyo_gc_del_domain(p->element)) + continue; + break; + } + tomoyo_free_element(p->element); + list_del(&p->list); + kfree(p); + } + } + done: + mutex_unlock(&tomoyo_gc_mutex); + return 0; +} + +/** + * tomoyo_gc_thread - Garbage collector thread for TOMOYO. + * + * @unused: Not used. 
+ */
+int tomoyo_gc_thread(void *unused)
+{
+	/*
+	 * Maybe this thread should be created and terminated as needed
+	 * rather than created upon boot and living forever...
+	 */
+	while (1) {
+		msleep(30000);
+		tomoyo_gc_thread_main(unused);
+	}
+}
--- security-testing-2.6.git.orig/security/tomoyo/realpath.h
+++ security-testing-2.6.git/security/tomoyo/realpath.h
@@ -44,10 +44,7 @@ bool tomoyo_memory_ok(void *ptr);
  * The RAM is shared, so NEVER try to modify or kfree() the returned name.
  */
 const struct tomoyo_path_info *tomoyo_get_name(const char *name);
-static inline void tomoyo_put_name(const struct tomoyo_path_info *name)
-{
-	/* It's a dummy so far. */
-}
+void tomoyo_put_name(const struct tomoyo_path_info *name);
 
 /* Allocate memory for temporary use (e.g. permission checks). */
 void *tomoyo_alloc(const size_t size);
--- security-testing-2.6.git.orig/security/tomoyo/tomoyo.c
+++ security-testing-2.6.git/security/tomoyo/tomoyo.c
@@ -22,9 +22,19 @@ static int tomoyo_cred_prepare(struct cr
 	 * we don't need to duplicate.
 	 */
 	new->security = old->security;
+	if (new->security)
+		atomic_inc(&((struct tomoyo_domain_info *)
+			     new->security)->users);
 	return 0;
 }
 
+static void tomoyo_cred_free(struct cred *cred)
+{
+	struct tomoyo_domain_info *domain = cred->security;
+	if (domain)
+		atomic_dec(&domain->users);
+}
+
 static int tomoyo_bprm_set_creds(struct linux_binprm *bprm)
 {
 	int rc;
@@ -49,7 +59,11 @@ static int tomoyo_bprm_set_creds(struct
 	 * Tell tomoyo_bprm_check_security() is called for the first time of an
 	 * execve operation.
 	 */
-	bprm->cred->security = NULL;
+	if (bprm->cred->security) {
+		atomic_dec(&((struct tomoyo_domain_info *)
+			     bprm->cred->security)->users);
+		bprm->cred->security = NULL;
+	}
 	return 0;
 }
 
@@ -263,6 +277,7 @@ static int tomoyo_dentry_open(struct fil
 static struct security_operations tomoyo_security_ops = {
 	.name = "tomoyo",
 	.cred_prepare = tomoyo_cred_prepare,
+	.cred_free = tomoyo_cred_free,
 	.bprm_set_creds = tomoyo_bprm_set_creds,
 	.bprm_check_security = tomoyo_bprm_check_security,
 #ifdef CONFIG_SYSCTL
@@ -291,6 +306,7 @@ static int __init tomoyo_init(void)
 		panic("Failure registering TOMOYO Linux");
 	printk(KERN_INFO "TOMOYO Linux initialized\n");
 	cred->security = &tomoyo_kernel_domain;
+	atomic_inc(&tomoyo_kernel_domain.users);
 	tomoyo_realpath_init();
 	return 0;
 }

^ permalink raw reply	[flat|nested] 16+ messages in thread
* Re: [PATCH 3/3] TOMOYO: Add SRCU based garbage collector.
  2009-06-18  6:45 ` [PATCH 3/3] TOMOYO: Add SRCU based garbage collector Tetsuo Handa
@ 2009-06-18 16:05 ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread
From: Paul E. McKenney @ 2009-06-18 16:05 UTC (permalink / raw)
To: Tetsuo Handa; +Cc: linux-security-module, linux-kernel

On Thu, Jun 18, 2009 at 03:45:59PM +0900, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > I have one worry regarding SRCU.
> > Inside synchronize_srcu(), there is a loop
> >
> >	while (srcu_readers_active_idx(sp, idx))
> >		schedule_timeout_interruptible(1);
> >
> > but the reader's sleeping duration varies from less than one second to
> > more than hours.
> >
> > Checking the counters every jiffy sounds like a waste of CPU.
> > Delaying kfree() for seconds or minutes won't cause trouble for TOMOYO.
> > It would be nice if the checking interval were configurable, e.g.
> > "schedule_timeout_interruptible(sp->timeout);".
>
> Well, the GC thread's schedule_timeout_interruptible(1); loop does not
> show up in /usr/bin/top, so I don't need to worry about the checking
> interval.

OK, that does make things easier.  ;-)  I won't bother with
set_srcu_timeout(), then.

> OK. Here is the SRCU version.
> ------------------------------
> Subject: [PATCH 3/3] TOMOYO: Add SRCU based garbage collector.
>
> As of now, TOMOYO cannot release memory used by marked-as-deleted list
> elements because TOMOYO does not know how many readers there are.
>
> This patch adds an SRCU based garbage collector.

I was able to make it through about the first 16% before reality
intruded.  Assuming I can trust the various "Caller holds
srcu_read_lock(&tomoyo_ss)" comments, that part of the code looks good.
Thanx, Paul > Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> > --- > security/tomoyo/common.c | 124 ++++++-------- > security/tomoyo/common.h | 180 ++++++++++++++++++++- > security/tomoyo/domain.c | 191 ++++------------------- > security/tomoyo/file.c | 174 ++++++++------------ > security/tomoyo/realpath.c | 373 +++++++++++++++++++++++++++++++++++++++++++-- > security/tomoyo/realpath.h | 5 > security/tomoyo/tomoyo.c | 18 ++ > 7 files changed, 707 insertions(+), 358 deletions(-) > > --- security-testing-2.6.git.orig/security/tomoyo/common.c > +++ security-testing-2.6.git/security/tomoyo/common.c > @@ -12,6 +12,7 @@ > #include <linux/uaccess.h> > #include <linux/security.h> > #include <linux/hardirq.h> > +#include <linux/kthread.h> > #include "realpath.h" > #include "common.h" > #include "tomoyo.h" > @@ -340,10 +341,9 @@ bool tomoyo_is_domain_def(const unsigned > * > * @domainname: The domainname to find. > * > - * Caller must call down_read(&tomoyo_domain_list_lock); or > - * down_write(&tomoyo_domain_list_lock); . > - * > * Returns pointer to "struct tomoyo_domain_info" if found, NULL otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). > */ > struct tomoyo_domain_info *tomoyo_find_domain(const char *domainname) > { > @@ -352,7 +352,7 @@ struct tomoyo_domain_info *tomoyo_find_d > > name.name = domainname; > tomoyo_fill_path_info(&name); > - list_for_each_entry(domain, &tomoyo_domain_list, list) { > + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { > if (!domain->is_deleted && > !tomoyo_pathcmp(&name, domain->domainname)) > return domain; > @@ -788,6 +788,8 @@ bool tomoyo_verbose_mode(const struct to > * @domain: Pointer to "struct tomoyo_domain_info". > * > * Returns true if the domain is not exceeded quota, false otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > bool tomoyo_domain_quota_is_ok(struct tomoyo_domain_info * const domain) > { > @@ -796,8 +798,7 @@ bool tomoyo_domain_quota_is_ok(struct to > > if (!domain) > return true; > - down_read(&tomoyo_domain_acl_info_list_lock); > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > if (ptr->type & TOMOYO_ACL_DELETED) > continue; > switch (tomoyo_acl_type2(ptr)) { > @@ -850,7 +851,6 @@ bool tomoyo_domain_quota_is_ok(struct to > break; > } > } > - up_read(&tomoyo_domain_acl_info_list_lock); > if (count < tomoyo_check_flags(domain, TOMOYO_MAX_ACCEPT_ENTRY)) > return true; > if (!domain->quota_warned) { > @@ -1029,27 +1029,6 @@ static int tomoyo_read_profile(struct to > } > > /* > - * tomoyo_policy_manager_entry is a structure which is used for holding list of > - * domainnames or programs which are permitted to modify configuration via > - * /sys/kernel/security/tomoyo/ interface. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_policy_manager_list . > - * (2) "manager" is a domainname or a program's pathname. > - * (3) "is_domain" is a bool which is true if "manager" is a domainname, false > - * otherwise. > - * (4) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - */ > -struct tomoyo_policy_manager_entry { > - struct list_head list; > - /* A path to program or a domainname. */ > - const struct tomoyo_path_info *manager; > - bool is_domain; /* True if manager is a domainname. */ > - bool is_deleted; /* True if this entry is deleted. */ > -}; > - > -/* > * tomoyo_policy_manager_list is used for holding list of domainnames or > * programs which are permitted to modify configuration via > * /sys/kernel/security/tomoyo/ interface. 
> @@ -1079,8 +1058,7 @@ struct tomoyo_policy_manager_entry { > * > * # cat /sys/kernel/security/tomoyo/manager > */ > -static LIST_HEAD(tomoyo_policy_manager_list); > -static DECLARE_RWSEM(tomoyo_policy_manager_list_lock); > +LIST_HEAD(tomoyo_policy_manager_list); > > /** > * tomoyo_update_manager_entry - Add a manager entry. > @@ -1112,8 +1090,8 @@ static int tomoyo_update_manager_entry(c > return -ENOMEM; > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_policy_manager_list_lock); > - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { > if (ptr->manager != saved_manager) > continue; > ptr->is_deleted = is_delete; > @@ -1124,11 +1102,12 @@ static int tomoyo_update_manager_entry(c > new_entry->manager = saved_manager; > saved_manager = NULL; > new_entry->is_domain = is_domain; > - list_add_tail(&new_entry->list, &tomoyo_policy_manager_list); > + list_add_tail_rcu(&new_entry->list, > + &tomoyo_policy_manager_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_policy_manager_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_manager); > kfree(new_entry); > return error; > @@ -1167,9 +1146,8 @@ static int tomoyo_read_manager_policy(st > > if (head->read_eof) > return 0; > - down_read(&tomoyo_policy_manager_list_lock); > - list_for_each_cookie(pos, head->read_var2, > - &tomoyo_policy_manager_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, > + &tomoyo_policy_manager_list) { > struct tomoyo_policy_manager_entry *ptr; > ptr = list_entry(pos, struct tomoyo_policy_manager_entry, > list); > @@ -1179,7 +1157,6 @@ static int tomoyo_read_manager_policy(st > if (!done) > break; > } > - up_read(&tomoyo_policy_manager_list_lock); > head->read_eof = done; > return 0; > } > @@ -1189,6 +1166,8 @@ static int tomoyo_read_manager_policy(st > * > * Returns true if the current 
process is permitted to modify policy > * via /sys/kernel/security/tomoyo/ interface. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). > */ > static bool tomoyo_is_policy_manager(void) > { > @@ -1202,29 +1181,25 @@ static bool tomoyo_is_policy_manager(voi > return true; > if (!tomoyo_manage_by_non_root && (task->cred->uid || task->cred->euid)) > return false; > - down_read(&tomoyo_policy_manager_list_lock); > - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { > if (!ptr->is_deleted && ptr->is_domain > && !tomoyo_pathcmp(domainname, ptr->manager)) { > found = true; > break; > } > } > - up_read(&tomoyo_policy_manager_list_lock); > if (found) > return true; > exe = tomoyo_get_exe(); > if (!exe) > return false; > - down_read(&tomoyo_policy_manager_list_lock); > - list_for_each_entry(ptr, &tomoyo_policy_manager_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, list) { > if (!ptr->is_deleted && !ptr->is_domain > && !strcmp(exe, ptr->manager->name)) { > found = true; > break; > } > } > - up_read(&tomoyo_policy_manager_list_lock); > if (!found) { /* Reduce error messages. */ > static pid_t last_pid; > const pid_t pid = current->pid; > @@ -1245,6 +1220,8 @@ static bool tomoyo_is_policy_manager(voi > * @data: String to parse. > * > * Returns true on success, false otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > static bool tomoyo_is_select_one(struct tomoyo_io_buffer *head, > const char *data) > @@ -1260,11 +1237,8 @@ static bool tomoyo_is_select_one(struct > domain = tomoyo_real_domain(p); > read_unlock(&tasklist_lock); > } else if (!strncmp(data, "domain=", 7)) { > - if (tomoyo_is_domain_def(data + 7)) { > - down_read(&tomoyo_domain_list_lock); > + if (tomoyo_is_domain_def(data + 7)) > domain = tomoyo_find_domain(data + 7); > - up_read(&tomoyo_domain_list_lock); > - } > } else > return false; > head->write_var1 = domain; > @@ -1278,13 +1252,11 @@ static bool tomoyo_is_select_one(struct > if (domain) { > struct tomoyo_domain_info *d; > head->read_var1 = NULL; > - down_read(&tomoyo_domain_list_lock); > - list_for_each_entry(d, &tomoyo_domain_list, list) { > + list_for_each_entry_rcu(d, &tomoyo_domain_list, list) { > if (d == domain) > break; > head->read_var1 = &d->list; > } > - up_read(&tomoyo_domain_list_lock); > head->read_var2 = NULL; > head->read_bit = 0; > head->read_step = 0; > @@ -1300,6 +1272,8 @@ static bool tomoyo_is_select_one(struct > * @domainname: The name of domain. > * > * Returns 0. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). > */ > static int tomoyo_delete_domain(char *domainname) > { > @@ -1308,9 +1282,9 @@ static int tomoyo_delete_domain(char *do > > name.name = domainname; > tomoyo_fill_path_info(&name); > - down_write(&tomoyo_domain_list_lock); > + mutex_lock(&tomoyo_policy_lock); > /* Is there an active domain? 
*/ > - list_for_each_entry(domain, &tomoyo_domain_list, list) { > + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { > /* Never delete tomoyo_kernel_domain */ > if (domain == &tomoyo_kernel_domain) > continue; > @@ -1320,7 +1294,7 @@ static int tomoyo_delete_domain(char *do > domain->is_deleted = true; > break; > } > - up_write(&tomoyo_domain_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > return 0; > } > > @@ -1330,6 +1304,8 @@ static int tomoyo_delete_domain(char *do > * @head: Pointer to "struct tomoyo_io_buffer". > * > * Returns 0 on success, negative value otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). > */ > static int tomoyo_write_domain_policy(struct tomoyo_io_buffer *head) > { > @@ -1352,11 +1328,9 @@ static int tomoyo_write_domain_policy(st > domain = NULL; > if (is_delete) > tomoyo_delete_domain(data); > - else if (is_select) { > - down_read(&tomoyo_domain_list_lock); > + else if (is_select) > domain = tomoyo_find_domain(data); > - up_read(&tomoyo_domain_list_lock); > - } else > + else > domain = tomoyo_find_or_assign_new_domain(data, 0); > head->write_var1 = domain; > return 0; > @@ -1511,8 +1485,7 @@ static int tomoyo_read_domain_policy(str > return 0; > if (head->read_step == 0) > head->read_step = 1; > - down_read(&tomoyo_domain_list_lock); > - list_for_each_cookie(dpos, head->read_var1, &tomoyo_domain_list) { > + list_for_each_cookie_rcu(dpos, head->read_var1, &tomoyo_domain_list) { > struct tomoyo_domain_info *domain; > const char *quota_exceeded = ""; > const char *transition_failed = ""; > @@ -1543,9 +1516,8 @@ acl_loop: > if (head->read_step == 3) > goto tail_mark; > /* Print ACL entries in the domain. 
*/ > - down_read(&tomoyo_domain_acl_info_list_lock); > - list_for_each_cookie(apos, head->read_var2, > - &domain->acl_info_list) { > + list_for_each_cookie_rcu(apos, head->read_var2, > + &domain->acl_info_list) { > struct tomoyo_acl_info *ptr > = list_entry(apos, struct tomoyo_acl_info, > list); > @@ -1553,7 +1525,6 @@ acl_loop: > if (!done) > break; > } > - up_read(&tomoyo_domain_acl_info_list_lock); > if (!done) > break; > head->read_step = 3; > @@ -1565,7 +1536,6 @@ tail_mark: > if (head->read_single_domain) > break; > } > - up_read(&tomoyo_domain_list_lock); > head->read_eof = done; > return 0; > } > @@ -1581,6 +1551,8 @@ tail_mark: > * > * ( echo "select " $domainname; echo "use_profile " $profile ) | > * /usr/lib/ccs/loadpolicy -d > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). > */ > static int tomoyo_write_domain_profile(struct tomoyo_io_buffer *head) > { > @@ -1592,9 +1564,7 @@ static int tomoyo_write_domain_profile(s > if (!cp) > return -EINVAL; > *cp = '\0'; > - down_read(&tomoyo_domain_list_lock); > domain = tomoyo_find_domain(cp + 1); > - up_read(&tomoyo_domain_list_lock); > if (strict_strtoul(data, 10, &profile)) > return -EINVAL; > if (domain && profile < TOMOYO_MAX_PROFILES > @@ -1624,8 +1594,7 @@ static int tomoyo_read_domain_profile(st > > if (head->read_eof) > return 0; > - down_read(&tomoyo_domain_list_lock); > - list_for_each_cookie(pos, head->read_var1, &tomoyo_domain_list) { > + list_for_each_cookie_rcu(pos, head->read_var1, &tomoyo_domain_list) { > struct tomoyo_domain_info *domain; > domain = list_entry(pos, struct tomoyo_domain_info, list); > if (domain->is_deleted) > @@ -1635,7 +1604,6 @@ static int tomoyo_read_domain_profile(st > if (!done) > break; > } > - up_read(&tomoyo_domain_list_lock); > head->read_eof = done; > return 0; > } > @@ -1854,16 +1822,24 @@ void tomoyo_load_policy(const char *file > printk(KERN_INFO "Mandatory Access Control activated.\n"); > tomoyo_policy_loaded = true; > { /* Check all profiles currently assigned 
to domains are defined. */ > + const int idx = srcu_read_lock(&tomoyo_ss); > struct tomoyo_domain_info *domain; > - down_read(&tomoyo_domain_list_lock); > - list_for_each_entry(domain, &tomoyo_domain_list, list) { > + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { > const u8 profile = domain->profile; > if (tomoyo_profile_ptr[profile]) > continue; > panic("Profile %u (used by '%s') not defined.\n", > profile, domain->domainname->name); > } > - up_read(&tomoyo_domain_list_lock); > + srcu_read_unlock(&tomoyo_ss, idx); > + } > + { > + struct task_struct *task = > + kthread_create(tomoyo_gc_thread, NULL, "GC for TOMOYO"); > + if (IS_ERR(task)) > + printk(KERN_ERR "GC thread not available.\n"); > + else > + wake_up_process(task); > } > } > > @@ -1997,6 +1973,7 @@ static int tomoyo_open_control(const u8 > } > } > file->private_data = head; > + head->tomoyo_srcu_index = srcu_read_lock(&tomoyo_ss); > /* > * Call the handler now if the file is > * /sys/kernel/security/tomoyo/self_domain > @@ -2114,6 +2091,7 @@ static int tomoyo_write_control(struct f > static int tomoyo_close_control(struct file *file) > { > struct tomoyo_io_buffer *head = file->private_data; > + srcu_read_unlock(&tomoyo_ss, head->tomoyo_srcu_index); > > /* Release memory used for policy I/O. */ > tomoyo_free(head->read_buf); > --- security-testing-2.6.git.orig/security/tomoyo/common.h > +++ security-testing-2.6.git/security/tomoyo/common.h > @@ -156,6 +156,7 @@ struct tomoyo_domain_info { > struct list_head acl_info_list; > /* Name of this domain. Never NULL. */ > const struct tomoyo_path_info *domainname; > + atomic_t users; > u8 profile; /* Profile number to use. */ > bool is_deleted; /* Delete flag. */ > bool quota_warned; /* Quota warnning flag. */ > @@ -266,6 +267,8 @@ struct tomoyo_io_buffer { > int (*write) (struct tomoyo_io_buffer *); > /* Exclusive lock for this structure. */ > struct mutex io_sem; > + /* counter which this structure locked. 
*/ > + int tomoyo_srcu_index; > /* The position currently reading from. */ > struct list_head *read_var1; > /* Extra variables for reading. */ > @@ -421,10 +424,9 @@ static inline bool tomoyo_is_invalid(con > > /* The list for "struct tomoyo_domain_info". */ > extern struct list_head tomoyo_domain_list; > -extern struct rw_semaphore tomoyo_domain_list_lock; > > -/* Lock for domain->acl_info_list. */ > -extern struct rw_semaphore tomoyo_domain_acl_info_list_lock; > +/* Lock for modifying policy. */ > +extern struct mutex tomoyo_policy_lock; > > /* Has /sbin/init started? */ > extern bool tomoyo_policy_loaded; > @@ -433,21 +435,181 @@ extern bool tomoyo_policy_loaded; > extern struct tomoyo_domain_info tomoyo_kernel_domain; > > /** > - * list_for_each_cookie - iterate over a list with cookie. > + * list_for_each_cookie_rcu - iterate over a list with cookie. > * @pos: the &struct list_head to use as a loop cursor. > * @cookie: the &struct list_head to use as a cookie. > * @head: the head for your list. > * > - * Same with list_for_each() except that this primitive uses @cookie > + * Same with __list_for_each_rcu() except that this primitive uses @cookie > * so that we can continue iteration. > * @cookie must be NULL when iteration starts, and @cookie will become > * NULL when iteration finishes. 
> */ > -#define list_for_each_cookie(pos, cookie, head) \ > +#define list_for_each_cookie_rcu(pos, cookie, head) \ > for (({ if (!cookie) \ > - cookie = head; }), \ > - pos = (cookie)->next; \ > + cookie = head; }), \ > + pos = rcu_dereference((cookie)->next); \ > prefetch(pos->next), pos != (head) || ((cookie) = NULL); \ > - (cookie) = pos, pos = pos->next) > + (cookie) = pos, pos = rcu_dereference(pos->next)) > + > +/* SRCU structure for GC */ > +extern struct srcu_struct tomoyo_ss; > + > +/* > + * tomoyo_policy_manager_entry is a structure which is used for holding list of > + * domainnames or programs which are permitted to modify configuration via > + * /sys/kernel/security/tomoyo/ interface. > + * It has following fields. > + * > + * (1) "list" which is linked to tomoyo_policy_manager_list . > + * (2) "manager" is a domainname or a program's pathname. > + * (3) "is_domain" is a bool which is true if "manager" is a domainname, false > + * otherwise. > + * (4) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. > + */ > +struct tomoyo_policy_manager_entry { > + struct list_head list; > + /* A path to program or a domainname. */ > + const struct tomoyo_path_info *manager; > + bool is_domain; /* True if manager is a domainname. */ > + bool is_deleted; /* True if this entry is deleted. */ > +}; > + > +extern struct list_head tomoyo_policy_manager_list; > + > +/* > + * tomoyo_globally_readable_file_entry is a structure which is used for holding > + * "allow_read" entries. > + * It has following fields. > + * > + * (1) "list" which is linked to tomoyo_globally_readable_list . > + * (2) "filename" is a pathname which is allowed to open(O_RDONLY). > + * (3) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. 
> + */ > +struct tomoyo_globally_readable_file_entry { > + struct list_head list; > + const struct tomoyo_path_info *filename; > + bool is_deleted; > +}; > + > +extern struct list_head tomoyo_globally_readable_list; > + > +/* > + * tomoyo_pattern_entry is a structure which is used for holding > + * "tomoyo_pattern_list" entries. > + * It has following fields. > + * > + * (1) "list" which is linked to tomoyo_pattern_list . > + * (2) "pattern" is a pathname pattern which is used for converting pathnames > + * to pathname patterns during learning mode. > + * (3) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. > + */ > +struct tomoyo_pattern_entry { > + struct list_head list; > + const struct tomoyo_path_info *pattern; > + bool is_deleted; > +}; > + > +extern struct list_head tomoyo_pattern_list; > + > +/* > + * tomoyo_no_rewrite_entry is a structure which is used for holding > + * "deny_rewrite" entries. > + * It has following fields. > + * > + * (1) "list" which is linked to tomoyo_no_rewrite_list . > + * (2) "pattern" is a pathname which is by default not permitted to modify > + * already existing content. > + * (3) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. > + */ > +struct tomoyo_no_rewrite_entry { > + struct list_head list; > + const struct tomoyo_path_info *pattern; > + bool is_deleted; > +}; > + > +extern struct list_head tomoyo_no_rewrite_list; > + > +/* > + * tomoyo_domain_initializer_entry is a structure which is used for holding > + * "initialize_domain" and "no_initialize_domain" entries. > + * It has following fields. > + * > + * (1) "list" which is linked to tomoyo_domain_initializer_list . > + * (2) "domainname" which is "a domainname" or "the last component of a > + * domainname". This field is NULL if "from" clause is not specified. > + * (3) "program" which is a program's pathname. > + * (4) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. 
> + * (5) "is_not" is a bool which is true if "no_initialize_domain", false > + * otherwise. > + * (6) "is_last_name" is a bool which is true if "domainname" is "the last > + * component of a domainname", false otherwise. > + */ > +struct tomoyo_domain_initializer_entry { > + struct list_head list; > + const struct tomoyo_path_info *domainname; /* This may be NULL */ > + const struct tomoyo_path_info *program; > + bool is_deleted; > + bool is_not; /* True if this entry is "no_initialize_domain". */ > + /* True if the domainname is tomoyo_get_last_name(). */ > + bool is_last_name; > +}; > + > +extern struct list_head tomoyo_domain_initializer_list; > + > +/* > + * tomoyo_domain_keeper_entry is a structure which is used for holding > + * "keep_domain" and "no_keep_domain" entries. > + * It has following fields. > + * > + * (1) "list" which is linked to tomoyo_domain_keeper_list . > + * (2) "domainname" which is "a domainname" or "the last component of a > + * domainname". > + * (3) "program" which is a program's pathname. > + * This field is NULL if "from" clause is not specified. > + * (4) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. > + * (5) "is_not" is a bool which is true if "no_initialize_domain", false > + * otherwise. > + * (6) "is_last_name" is a bool which is true if "domainname" is "the last > + * component of a domainname", false otherwise. > + */ > +struct tomoyo_domain_keeper_entry { > + struct list_head list; > + const struct tomoyo_path_info *domainname; > + const struct tomoyo_path_info *program; /* This may be NULL */ > + bool is_deleted; > + bool is_not; /* True if this entry is "no_keep_domain". */ > + /* True if the domainname is tomoyo_get_last_name(). */ > + bool is_last_name; > +}; > + > +extern struct list_head tomoyo_domain_keeper_list; > + > +/* > + * tomoyo_alias_entry is a structure which is used for holding "alias" entries. > + * It has following fields. 
> + * > + * (1) "list" which is linked to tomoyo_alias_list . > + * (2) "original_name" which is a dereferenced pathname. > + * (3) "aliased_name" which is a symlink's pathname. > + * (4) "is_deleted" is a bool which is true if marked as deleted, false > + * otherwise. > + */ > +struct tomoyo_alias_entry { > + struct list_head list; > + const struct tomoyo_path_info *original_name; > + const struct tomoyo_path_info *aliased_name; > + bool is_deleted; > +}; > + > +extern struct list_head tomoyo_alias_list; > + > +int tomoyo_gc_thread(void *unused); > > #endif /* !defined(_SECURITY_TOMOYO_COMMON_H) */ > --- security-testing-2.6.git.orig/security/tomoyo/domain.c > +++ security-testing-2.6.git/security/tomoyo/domain.c > @@ -58,77 +58,6 @@ struct tomoyo_domain_info tomoyo_kernel_ > * exceptions. > */ > LIST_HEAD(tomoyo_domain_list); > -DECLARE_RWSEM(tomoyo_domain_list_lock); > - > -/* > - * tomoyo_domain_initializer_entry is a structure which is used for holding > - * "initialize_domain" and "no_initialize_domain" entries. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_domain_initializer_list . > - * (2) "domainname" which is "a domainname" or "the last component of a > - * domainname". This field is NULL if "from" clause is not specified. > - * (3) "program" which is a program's pathname. > - * (4) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - * (5) "is_not" is a bool which is true if "no_initialize_domain", false > - * otherwise. > - * (6) "is_last_name" is a bool which is true if "domainname" is "the last > - * component of a domainname", false otherwise. > - */ > -struct tomoyo_domain_initializer_entry { > - struct list_head list; > - const struct tomoyo_path_info *domainname; /* This may be NULL */ > - const struct tomoyo_path_info *program; > - bool is_deleted; > - bool is_not; /* True if this entry is "no_initialize_domain". */ > - /* True if the domainname is tomoyo_get_last_name(). 
*/ > - bool is_last_name; > -}; > - > -/* > - * tomoyo_domain_keeper_entry is a structure which is used for holding > - * "keep_domain" and "no_keep_domain" entries. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_domain_keeper_list . > - * (2) "domainname" which is "a domainname" or "the last component of a > - * domainname". > - * (3) "program" which is a program's pathname. > - * This field is NULL if "from" clause is not specified. > - * (4) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - * (5) "is_not" is a bool which is true if "no_initialize_domain", false > - * otherwise. > - * (6) "is_last_name" is a bool which is true if "domainname" is "the last > - * component of a domainname", false otherwise. > - */ > -struct tomoyo_domain_keeper_entry { > - struct list_head list; > - const struct tomoyo_path_info *domainname; > - const struct tomoyo_path_info *program; /* This may be NULL */ > - bool is_deleted; > - bool is_not; /* True if this entry is "no_keep_domain". */ > - /* True if the domainname is tomoyo_get_last_name(). */ > - bool is_last_name; > -}; > - > -/* > - * tomoyo_alias_entry is a structure which is used for holding "alias" entries. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_alias_list . > - * (2) "original_name" which is a dereferenced pathname. > - * (3) "aliased_name" which is a symlink's pathname. > - * (4) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - */ > -struct tomoyo_alias_entry { > - struct list_head list; > - const struct tomoyo_path_info *original_name; > - const struct tomoyo_path_info *aliased_name; > - bool is_deleted; > -}; > > /** > * tomoyo_get_last_name - Get last component of a domainname. 
> @@ -183,8 +112,7 @@ const char *tomoyo_get_last_name(const s > * will cause "/usr/sbin/httpd" to belong to "<kernel> /usr/sbin/httpd" domain > * unless executed from "<kernel> /etc/rc.d/init.d/httpd" domain. > */ > -static LIST_HEAD(tomoyo_domain_initializer_list); > -static DECLARE_RWSEM(tomoyo_domain_initializer_list_lock); > +LIST_HEAD(tomoyo_domain_initializer_list); > > /** > * tomoyo_update_domain_initializer_entry - Update "struct tomoyo_domain_initializer_entry" list. > @@ -227,8 +155,8 @@ static int tomoyo_update_domain_initiali > } > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_domain_initializer_list_lock); > - list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, list) { > if (ptr->is_not != is_not || > ptr->domainname != saved_domainname || > ptr->program != saved_program) > @@ -244,12 +172,12 @@ static int tomoyo_update_domain_initiali > saved_program = NULL; > new_entry->is_not = is_not; > new_entry->is_last_name = is_last_name; > - list_add_tail(&new_entry->list, > - &tomoyo_domain_initializer_list); > + list_add_tail_rcu(&new_entry->list, > + &tomoyo_domain_initializer_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_domain_initializer_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_domainname); > tomoyo_put_name(saved_program); > kfree(new_entry); > @@ -268,15 +196,14 @@ bool tomoyo_read_domain_initializer_poli > struct list_head *pos; > bool done = true; > > - down_read(&tomoyo_domain_initializer_list_lock); > - list_for_each_cookie(pos, head->read_var2, > - &tomoyo_domain_initializer_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, > + &tomoyo_domain_initializer_list) { > const char *no; > const char *from = ""; > const char *domain = ""; > struct tomoyo_domain_initializer_entry *ptr; > ptr = list_entry(pos, struct 
tomoyo_domain_initializer_entry, > - list); > + list); > if (ptr->is_deleted) > continue; > no = ptr->is_not ? "no_" : ""; > @@ -291,7 +218,6 @@ bool tomoyo_read_domain_initializer_poli > if (!done) > break; > } > - up_read(&tomoyo_domain_initializer_list_lock); > return done; > } > > @@ -328,6 +254,8 @@ int tomoyo_write_domain_initializer_poli > * > * Returns true if executing @program reinitializes domain transition, > * false otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). > */ > static bool tomoyo_is_domain_initializer(const struct tomoyo_path_info * > domainname, > @@ -338,8 +266,7 @@ static bool tomoyo_is_domain_initializer > struct tomoyo_domain_initializer_entry *ptr; > bool flag = false; > > - down_read(&tomoyo_domain_initializer_list_lock); > - list_for_each_entry(ptr, &tomoyo_domain_initializer_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, list) { > if (ptr->is_deleted) > continue; > if (ptr->domainname) { > @@ -359,7 +286,6 @@ static bool tomoyo_is_domain_initializer > } > flag = true; > } > - up_read(&tomoyo_domain_initializer_list_lock); > return flag; > } > > @@ -401,8 +327,7 @@ static bool tomoyo_is_domain_initializer > * "<kernel> /usr/sbin/sshd /bin/bash /usr/bin/passwd" domain, unless > * explicitly specified by "initialize_domain". > */ > -static LIST_HEAD(tomoyo_domain_keeper_list); > -static DECLARE_RWSEM(tomoyo_domain_keeper_list_lock); > +LIST_HEAD(tomoyo_domain_keeper_list); > > /** > * tomoyo_update_domain_keeper_entry - Update "struct tomoyo_domain_keeper_entry" list. 
> @@ -445,8 +370,8 @@ static int tomoyo_update_domain_keeper_e > } > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_domain_keeper_list_lock); > - list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, list) { > if (ptr->is_not != is_not || > ptr->domainname != saved_domainname || > ptr->program != saved_program) > @@ -462,11 +387,12 @@ static int tomoyo_update_domain_keeper_e > saved_program = NULL; > new_entry->is_not = is_not; > new_entry->is_last_name = is_last_name; > - list_add_tail(&new_entry->list, &tomoyo_domain_keeper_list); > + list_add_tail_rcu(&new_entry->list, > + &tomoyo_domain_keeper_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_domain_keeper_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_domainname); > tomoyo_put_name(saved_program); > kfree(new_entry); > @@ -506,9 +432,8 @@ bool tomoyo_read_domain_keeper_policy(st > struct list_head *pos; > bool done = true; > > - down_read(&tomoyo_domain_keeper_list_lock); > - list_for_each_cookie(pos, head->read_var2, > - &tomoyo_domain_keeper_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, > + &tomoyo_domain_keeper_list) { > struct tomoyo_domain_keeper_entry *ptr; > const char *no; > const char *from = ""; > @@ -529,7 +454,6 @@ bool tomoyo_read_domain_keeper_policy(st > if (!done) > break; > } > - up_read(&tomoyo_domain_keeper_list_lock); > return done; > } > > @@ -542,6 +466,8 @@ bool tomoyo_read_domain_keeper_policy(st > * > * Returns true if executing @program supresses domain transition, > * false otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > static bool tomoyo_is_domain_keeper(const struct tomoyo_path_info *domainname, > const struct tomoyo_path_info *program, > @@ -550,8 +476,7 @@ static bool tomoyo_is_domain_keeper(cons > struct tomoyo_domain_keeper_entry *ptr; > bool flag = false; > > - down_read(&tomoyo_domain_keeper_list_lock); > - list_for_each_entry(ptr, &tomoyo_domain_keeper_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, list) { > if (ptr->is_deleted) > continue; > if (!ptr->is_last_name) { > @@ -569,7 +494,6 @@ static bool tomoyo_is_domain_keeper(cons > } > flag = true; > } > - up_read(&tomoyo_domain_keeper_list_lock); > return flag; > } > > @@ -603,8 +527,7 @@ static bool tomoyo_is_domain_keeper(cons > * /bin/busybox and domainname which the current process will belong to after > * execve() succeeds is calculated using /bin/cat rather than /bin/busybox . > */ > -static LIST_HEAD(tomoyo_alias_list); > -static DECLARE_RWSEM(tomoyo_alias_list_lock); > +LIST_HEAD(tomoyo_alias_list); > > /** > * tomoyo_update_alias_entry - Update "struct tomoyo_alias_entry" list. 
> @@ -637,8 +560,8 @@ static int tomoyo_update_alias_entry(con > } > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_alias_list_lock); > - list_for_each_entry(ptr, &tomoyo_alias_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { > if (ptr->original_name != saved_original_name || > ptr->aliased_name != saved_aliased_name) > continue; > @@ -651,11 +574,11 @@ static int tomoyo_update_alias_entry(con > saved_original_name = NULL; > new_entry->aliased_name = saved_aliased_name; > saved_aliased_name = NULL; > - list_add_tail(&new_entry->list, &tomoyo_alias_list); > + list_add_tail_rcu(&new_entry->list, &tomoyo_alias_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_alias_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_original_name); > tomoyo_put_name(saved_aliased_name); > kfree(new_entry); > @@ -674,8 +597,7 @@ bool tomoyo_read_alias_policy(struct tom > struct list_head *pos; > bool done = true; > > - down_read(&tomoyo_alias_list_lock); > - list_for_each_cookie(pos, head->read_var2, &tomoyo_alias_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, &tomoyo_alias_list) { > struct tomoyo_alias_entry *ptr; > > ptr = list_entry(pos, struct tomoyo_alias_entry, list); > @@ -687,7 +609,6 @@ bool tomoyo_read_alias_policy(struct tom > if (!done) > break; > } > - up_read(&tomoyo_alias_list_lock); > return done; > } > > @@ -731,52 +652,18 @@ struct tomoyo_domain_info *tomoyo_find_o > if (!saved_domainname) > return NULL; > new_domain = kmalloc(sizeof(*new_domain), GFP_KERNEL); > - down_write(&tomoyo_domain_list_lock); > + mutex_lock(&tomoyo_policy_lock); > domain = tomoyo_find_domain(domainname); > - if (domain) > - goto out; > - /* Can I reuse memory of deleted domain? 
*/ > - list_for_each_entry(domain, &tomoyo_domain_list, list) { > - struct task_struct *p; > - struct tomoyo_acl_info *ptr; > - bool flag; > - if (!domain->is_deleted || > - domain->domainname != saved_domainname) > - continue; > - flag = false; > - read_lock(&tasklist_lock); > - for_each_process(p) { > - if (tomoyo_real_domain(p) != domain) > - continue; > - flag = true; > - break; > - } > - read_unlock(&tasklist_lock); > - if (flag) > - continue; > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > - ptr->type |= TOMOYO_ACL_DELETED; > - } > - domain->ignore_global_allow_read = false; > - domain->domain_transition_failed = false; > - domain->profile = profile; > - domain->quota_warned = false; > - mb(); /* Avoid out-of-order execution. */ > - domain->is_deleted = false; > - goto out; > - } > - /* No memory reusable. Create using new memory. */ > - if (tomoyo_memory_ok(new_domain)) { > + if (!domain && tomoyo_memory_ok(new_domain)) { > domain = new_domain; > new_domain = NULL; > INIT_LIST_HEAD(&domain->acl_info_list); > domain->domainname = saved_domainname; > saved_domainname = NULL; > domain->profile = profile; > - list_add_tail(&domain->list, &tomoyo_domain_list); > + list_add_tail_rcu(&domain->list, &tomoyo_domain_list); > } > - out: > - up_write(&tomoyo_domain_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_domainname); > kfree(new_domain); > return domain; > @@ -788,6 +675,8 @@ struct tomoyo_domain_info *tomoyo_find_o > * @bprm: Pointer to "struct linux_binprm". > * > * Returns 0 on success, negative value otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > int tomoyo_find_next_domain(struct linux_binprm *bprm) > { > @@ -810,6 +699,7 @@ int tomoyo_find_next_domain(struct linux > struct tomoyo_path_info s; /* symlink name */ > struct tomoyo_path_info l; /* last name */ > static bool initialized; > + const int idx = srcu_read_lock(&tomoyo_ss); > > if (!tmp) > goto out; > @@ -848,8 +738,7 @@ int tomoyo_find_next_domain(struct linux > if (tomoyo_pathcmp(&r, &s)) { > struct tomoyo_alias_entry *ptr; > /* Is this program allowed to be called via symbolic links? */ > - down_read(&tomoyo_alias_list_lock); > - list_for_each_entry(ptr, &tomoyo_alias_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { > if (ptr->is_deleted || > tomoyo_pathcmp(&r, ptr->original_name) || > tomoyo_pathcmp(&s, ptr->aliased_name)) > @@ -860,7 +749,6 @@ int tomoyo_find_next_domain(struct linux > tomoyo_fill_path_info(&r); > break; > } > - up_read(&tomoyo_alias_list_lock); > } > > /* Check execute permission. */ > @@ -891,9 +779,7 @@ int tomoyo_find_next_domain(struct linux > } > if (domain || strlen(new_domain_name) >= TOMOYO_MAX_PATHNAME_LEN) > goto done; > - down_read(&tomoyo_domain_list_lock); > domain = tomoyo_find_domain(new_domain_name); > - up_read(&tomoyo_domain_list_lock); > if (domain) > goto done; > if (is_enforce) > @@ -910,9 +796,12 @@ int tomoyo_find_next_domain(struct linux > else > old_domain->domain_transition_failed = true; > out: > + BUG_ON(bprm->cred->security); > if (!domain) > domain = old_domain; > + atomic_inc(&domain->users); > bprm->cred->security = domain; > + srcu_read_unlock(&tomoyo_ss, idx); > tomoyo_free(real_program_name); > tomoyo_free(symlink_program_name); > tomoyo_free(tmp); > --- security-testing-2.6.git.orig/security/tomoyo/file.c > +++ security-testing-2.6.git/security/tomoyo/file.c > @@ -14,56 +14,6 @@ > #include "realpath.h" > #define ACC_MODE(x) ("\000\004\002\006"[(x)&O_ACCMODE]) > > -/* > - * tomoyo_globally_readable_file_entry is a structure which is used for holding > - * 
"allow_read" entries. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_globally_readable_list . > - * (2) "filename" is a pathname which is allowed to open(O_RDONLY). > - * (3) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - */ > -struct tomoyo_globally_readable_file_entry { > - struct list_head list; > - const struct tomoyo_path_info *filename; > - bool is_deleted; > -}; > - > -/* > - * tomoyo_pattern_entry is a structure which is used for holding > - * "tomoyo_pattern_list" entries. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_pattern_list . > - * (2) "pattern" is a pathname pattern which is used for converting pathnames > - * to pathname patterns during learning mode. > - * (3) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - */ > -struct tomoyo_pattern_entry { > - struct list_head list; > - const struct tomoyo_path_info *pattern; > - bool is_deleted; > -}; > - > -/* > - * tomoyo_no_rewrite_entry is a structure which is used for holding > - * "deny_rewrite" entries. > - * It has following fields. > - * > - * (1) "list" which is linked to tomoyo_no_rewrite_list . > - * (2) "pattern" is a pathname which is by default not permitted to modify > - * already existing content. > - * (3) "is_deleted" is a bool which is true if marked as deleted, false > - * otherwise. > - */ > -struct tomoyo_no_rewrite_entry { > - struct list_head list; > - const struct tomoyo_path_info *pattern; > - bool is_deleted; > -}; > - > /* Keyword array for single path operations. */ > static const char *tomoyo_sp_keyword[TOMOYO_MAX_SINGLE_PATH_OPERATION] = { > [TOMOYO_TYPE_READ_WRITE_ACL] = "read/write", > @@ -159,8 +109,8 @@ static struct tomoyo_path_info *tomoyo_g > return NULL; > } > > -/* Lock for domain->acl_info_list. */ > -DECLARE_RWSEM(tomoyo_domain_acl_info_list_lock); > +/* Lock for modifying TOMOYO's policy. 
*/ > +DEFINE_MUTEX(tomoyo_policy_lock); > > static int tomoyo_update_double_path_acl(const u8 type, const char *filename1, > const char *filename2, > @@ -195,8 +145,7 @@ static int tomoyo_update_single_path_acl > * given "allow_read /lib/libc-2.5.so" to the domain which current process > * belongs to. > */ > -static LIST_HEAD(tomoyo_globally_readable_list); > -static DECLARE_RWSEM(tomoyo_globally_readable_list_lock); > +LIST_HEAD(tomoyo_globally_readable_list); > > /** > * tomoyo_update_globally_readable_entry - Update "struct tomoyo_globally_readable_file_entry" list. > @@ -221,8 +170,8 @@ static int tomoyo_update_globally_readab > return -ENOMEM; > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_globally_readable_list_lock); > - list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, list) { > if (ptr->filename != saved_filename) > continue; > ptr->is_deleted = is_delete; > @@ -232,11 +181,12 @@ static int tomoyo_update_globally_readab > if (!is_delete && error && tomoyo_memory_ok(new_entry)) { > new_entry->filename = saved_filename; > saved_filename = NULL; > - list_add_tail(&new_entry->list, &tomoyo_globally_readable_list); > + list_add_tail_rcu(&new_entry->list, > + &tomoyo_globally_readable_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_globally_readable_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_filename); > kfree(new_entry); > return error; > @@ -248,21 +198,21 @@ static int tomoyo_update_globally_readab > * @filename: The filename to check. > * > * Returns true if any domain can open @filename for reading, false otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > static bool tomoyo_is_globally_readable_file(const struct tomoyo_path_info * > filename) > { > struct tomoyo_globally_readable_file_entry *ptr; > bool found = false; > - down_read(&tomoyo_globally_readable_list_lock); > - list_for_each_entry(ptr, &tomoyo_globally_readable_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, list) { > if (!ptr->is_deleted && > tomoyo_path_matches_pattern(filename, ptr->filename)) { > found = true; > break; > } > } > - up_read(&tomoyo_globally_readable_list_lock); > return found; > } > > @@ -291,9 +241,8 @@ bool tomoyo_read_globally_readable_polic > struct list_head *pos; > bool done = true; > > - down_read(&tomoyo_globally_readable_list_lock); > - list_for_each_cookie(pos, head->read_var2, > - &tomoyo_globally_readable_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, > + &tomoyo_globally_readable_list) { > struct tomoyo_globally_readable_file_entry *ptr; > ptr = list_entry(pos, > struct tomoyo_globally_readable_file_entry, > @@ -305,7 +254,6 @@ bool tomoyo_read_globally_readable_polic > if (!done) > break; > } > - up_read(&tomoyo_globally_readable_list_lock); > return done; > } > > @@ -338,8 +286,7 @@ bool tomoyo_read_globally_readable_polic > * which pretends as if /proc/self/ is not a symlink; so that we can forbid > * current process from accessing other process's information. > */ > -static LIST_HEAD(tomoyo_pattern_list); > -static DECLARE_RWSEM(tomoyo_pattern_list_lock); > +LIST_HEAD(tomoyo_pattern_list); > > /** > * tomoyo_update_file_pattern_entry - Update "struct tomoyo_pattern_entry" list. 
> @@ -364,8 +311,8 @@ static int tomoyo_update_file_pattern_en > return -ENOMEM; > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_pattern_list_lock); > - list_for_each_entry(ptr, &tomoyo_pattern_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { > if (saved_pattern != ptr->pattern) > continue; > ptr->is_deleted = is_delete; > @@ -375,11 +322,11 @@ static int tomoyo_update_file_pattern_en > if (!is_delete && error && tomoyo_memory_ok(new_entry)) { > new_entry->pattern = saved_pattern; > saved_pattern = NULL; > - list_add_tail(&new_entry->list, &tomoyo_pattern_list); > + list_add_tail_rcu(&new_entry->list, &tomoyo_pattern_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_pattern_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_pattern); > kfree(new_entry); > return error; > @@ -391,6 +338,8 @@ static int tomoyo_update_file_pattern_en > * @filename: The filename to find patterned pathname. > * > * Returns pointer to pathname pattern if matched, @filename otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > static const struct tomoyo_path_info * > tomoyo_get_file_pattern(const struct tomoyo_path_info *filename) > @@ -398,8 +347,7 @@ tomoyo_get_file_pattern(const struct tom > struct tomoyo_pattern_entry *ptr; > const struct tomoyo_path_info *pattern = NULL; > > - down_read(&tomoyo_pattern_list_lock); > - list_for_each_entry(ptr, &tomoyo_pattern_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { > if (ptr->is_deleted) > continue; > if (!tomoyo_path_matches_pattern(filename, ptr->pattern)) > @@ -412,7 +360,6 @@ tomoyo_get_file_pattern(const struct tom > break; > } > } > - up_read(&tomoyo_pattern_list_lock); > if (pattern) > filename = pattern; > return filename; > @@ -443,8 +390,7 @@ bool tomoyo_read_file_pattern(struct tom > struct list_head *pos; > bool done = true; > > - down_read(&tomoyo_pattern_list_lock); > - list_for_each_cookie(pos, head->read_var2, &tomoyo_pattern_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, &tomoyo_pattern_list) { > struct tomoyo_pattern_entry *ptr; > ptr = list_entry(pos, struct tomoyo_pattern_entry, list); > if (ptr->is_deleted) > @@ -454,7 +400,6 @@ bool tomoyo_read_file_pattern(struct tom > if (!done) > break; > } > - up_read(&tomoyo_pattern_list_lock); > return done; > } > > @@ -487,8 +432,7 @@ bool tomoyo_read_file_pattern(struct tom > * " (deleted)" suffix if the file is already unlink()ed; so that we don't > * need to worry whether the file is already unlink()ed or not. > */ > -static LIST_HEAD(tomoyo_no_rewrite_list); > -static DECLARE_RWSEM(tomoyo_no_rewrite_list_lock); > +LIST_HEAD(tomoyo_no_rewrite_list); > > /** > * tomoyo_update_no_rewrite_entry - Update "struct tomoyo_no_rewrite_entry" list. 
> @@ -513,8 +457,8 @@ static int tomoyo_update_no_rewrite_entr > return -ENOMEM; > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_no_rewrite_list_lock); > - list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { > + mutex_lock(&tomoyo_policy_lock); > + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { > if (ptr->pattern != saved_pattern) > continue; > ptr->is_deleted = is_delete; > @@ -524,11 +468,11 @@ static int tomoyo_update_no_rewrite_entr > if (!is_delete && error && tomoyo_memory_ok(new_entry)) { > new_entry->pattern = saved_pattern; > saved_pattern = NULL; > - list_add_tail(&new_entry->list, &tomoyo_no_rewrite_list); > + list_add_tail_rcu(&new_entry->list, &tomoyo_no_rewrite_list); > new_entry = NULL; > error = 0; > } > - up_write(&tomoyo_no_rewrite_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_pattern); > return error; > } > @@ -540,14 +484,15 @@ static int tomoyo_update_no_rewrite_entr > * > * Returns true if @filename is specified by "deny_rewrite" directive, > * false otherwise. > + * > + * Caller holds srcu_read_lock(&tomoyo_ss). 
> */ > static bool tomoyo_is_no_rewrite_file(const struct tomoyo_path_info *filename) > { > struct tomoyo_no_rewrite_entry *ptr; > bool found = false; > > - down_read(&tomoyo_no_rewrite_list_lock); > - list_for_each_entry(ptr, &tomoyo_no_rewrite_list, list) { > + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { > if (ptr->is_deleted) > continue; > if (!tomoyo_path_matches_pattern(filename, ptr->pattern)) > @@ -555,7 +500,6 @@ static bool tomoyo_is_no_rewrite_file(co > found = true; > break; > } > - up_read(&tomoyo_no_rewrite_list_lock); > return found; > } > > @@ -584,8 +528,8 @@ bool tomoyo_read_no_rewrite_policy(struc > struct list_head *pos; > bool done = true; > > - down_read(&tomoyo_no_rewrite_list_lock); > - list_for_each_cookie(pos, head->read_var2, &tomoyo_no_rewrite_list) { > + list_for_each_cookie_rcu(pos, head->read_var2, > + &tomoyo_no_rewrite_list) { > struct tomoyo_no_rewrite_entry *ptr; > ptr = list_entry(pos, struct tomoyo_no_rewrite_entry, list); > if (ptr->is_deleted) > @@ -595,7 +539,6 @@ bool tomoyo_read_no_rewrite_policy(struc > if (!done) > break; > } > - up_read(&tomoyo_no_rewrite_list_lock); > return done; > } > > @@ -660,9 +603,9 @@ static int tomoyo_check_single_path_acl2 > { > struct tomoyo_acl_info *ptr; > int error = -EPERM; > + const int idx = srcu_read_lock(&tomoyo_ss); > > - down_read(&tomoyo_domain_acl_info_list_lock); > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > struct tomoyo_single_path_acl_record *acl; > if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) > continue; > @@ -680,7 +623,7 @@ static int tomoyo_check_single_path_acl2 > error = 0; > break; > } > - up_read(&tomoyo_domain_acl_info_list_lock); > + srcu_read_unlock(&tomoyo_ss, idx); > return error; > } > > @@ -846,10 +789,10 @@ static int tomoyo_update_single_path_acl > return -ENOMEM; > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - 
down_write(&tomoyo_domain_acl_info_list_lock); > + mutex_lock(&tomoyo_policy_lock); > if (is_delete) > goto delete; > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > struct tomoyo_single_path_acl_record *acl; > if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) > continue; > @@ -877,13 +820,14 @@ static int tomoyo_update_single_path_acl > new_entry->perm |= rw_mask; > new_entry->filename = saved_filename; > saved_filename = NULL; > - list_add_tail(&new_entry->head.list, &domain->acl_info_list); > + list_add_tail_rcu(&new_entry->head.list, > + &domain->acl_info_list); > new_entry = NULL; > error = 0; > } > goto out; > delete: > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > struct tomoyo_single_path_acl_record *acl; > if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_SINGLE_PATH_ACL) > continue; > @@ -902,7 +846,7 @@ static int tomoyo_update_single_path_acl > break; > } > out: > - up_write(&tomoyo_domain_acl_info_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_filename); > kfree(new_entry); > return error; > @@ -945,10 +889,10 @@ static int tomoyo_update_double_path_acl > } > if (!is_delete) > new_entry = kmalloc(sizeof(*new_entry), GFP_KERNEL); > - down_write(&tomoyo_domain_acl_info_list_lock); > + mutex_lock(&tomoyo_policy_lock); > if (is_delete) > goto delete; > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > struct tomoyo_double_path_acl_record *acl; > if (tomoyo_acl_type1(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) > continue; > @@ -973,13 +917,14 @@ static int tomoyo_update_double_path_acl > saved_filename1 = NULL; > new_entry->filename2 = saved_filename2; > saved_filename2 = NULL; > - list_add_tail(&new_entry->head.list, &domain->acl_info_list); > + list_add_tail_rcu(&new_entry->head.list, > + 
&domain->acl_info_list); > new_entry = NULL; > error = 0; > } > goto out; > delete: > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > struct tomoyo_double_path_acl_record *acl; > if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) > continue; > @@ -995,7 +940,7 @@ static int tomoyo_update_double_path_acl > break; > } > out: > - up_write(&tomoyo_domain_acl_info_list_lock); > + mutex_unlock(&tomoyo_policy_lock); > tomoyo_put_name(saved_filename1); > tomoyo_put_name(saved_filename2); > kfree(new_entry); > @@ -1040,11 +985,12 @@ static int tomoyo_check_double_path_acl( > struct tomoyo_acl_info *ptr; > const u8 perm = 1 << type; > int error = -EPERM; > + int idx; > > if (!tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE)) > return 0; > - down_read(&tomoyo_domain_acl_info_list_lock); > - list_for_each_entry(ptr, &domain->acl_info_list, list) { > + idx = srcu_read_lock(&tomoyo_ss); > + list_for_each_entry_rcu(ptr, &domain->acl_info_list, list) { > struct tomoyo_double_path_acl_record *acl; > if (tomoyo_acl_type2(ptr) != TOMOYO_TYPE_DOUBLE_PATH_ACL) > continue; > @@ -1059,7 +1005,7 @@ static int tomoyo_check_double_path_acl( > error = 0; > break; > } > - up_read(&tomoyo_domain_acl_info_list_lock); > + srcu_read_unlock(&tomoyo_ss, idx); > return error; > } > > @@ -1169,6 +1115,7 @@ int tomoyo_check_open_permission(struct > struct tomoyo_path_info *buf; > const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); > const bool is_enforce = (mode == 3); > + int idx; > > if (!mode || !path->mnt) > return 0; > @@ -1184,6 +1131,7 @@ int tomoyo_check_open_permission(struct > if (!buf) > goto out; > error = 0; > + idx = srcu_read_lock(&tomoyo_ss); > /* > * If the filename is specified by "deny_rewrite" keyword, > * we need to check "allow_rewrite" permission when the filename is not > @@ -1203,6 +1151,7 @@ int tomoyo_check_open_permission(struct > error = 
tomoyo_check_single_path_permission2(domain, > TOMOYO_TYPE_TRUNCATE_ACL, > buf, mode); > + srcu_read_unlock(&tomoyo_ss, idx); > out: > tomoyo_free(buf); > if (!is_enforce) > @@ -1226,6 +1175,7 @@ int tomoyo_check_1path_perm(struct tomoy > struct tomoyo_path_info *buf; > const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); > const bool is_enforce = (mode == 3); > + int idx; > > if (!mode || !path->mnt) > return 0; > @@ -1243,8 +1193,10 @@ int tomoyo_check_1path_perm(struct tomoy > tomoyo_fill_path_info(buf); > } > } > + idx = srcu_read_lock(&tomoyo_ss); > error = tomoyo_check_single_path_permission2(domain, operation, buf, > mode); > + srcu_read_unlock(&tomoyo_ss, idx); > out: > tomoyo_free(buf); > if (!is_enforce) > @@ -1267,19 +1219,23 @@ int tomoyo_check_rewrite_permission(stru > const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); > const bool is_enforce = (mode == 3); > struct tomoyo_path_info *buf; > + int idx; > > if (!mode || !filp->f_path.mnt) > return 0; > buf = tomoyo_get_path(&filp->f_path); > if (!buf) > goto out; > + idx = srcu_read_lock(&tomoyo_ss); > if (!tomoyo_is_no_rewrite_file(buf)) { > error = 0; > - goto out; > + goto ok; > } > error = tomoyo_check_single_path_permission2(domain, > TOMOYO_TYPE_REWRITE_ACL, > buf, mode); > + ok: > + srcu_read_unlock(&tomoyo_ss, idx); > out: > tomoyo_free(buf); > if (!is_enforce) > @@ -1306,6 +1262,7 @@ int tomoyo_check_2path_perm(struct tomoy > const u8 mode = tomoyo_check_flags(domain, TOMOYO_MAC_FOR_FILE); > const bool is_enforce = (mode == 3); > const char *msg; > + int idx; > > if (!mode || !path1->mnt || !path2->mnt) > return 0; > @@ -1329,10 +1286,11 @@ int tomoyo_check_2path_perm(struct tomoy > } > } > } > + idx = srcu_read_lock(&tomoyo_ss); > error = tomoyo_check_double_path_acl(domain, operation, buf1, buf2); > msg = tomoyo_dp2keyword(operation); > if (!error) > - goto out; > + goto ok; > if (tomoyo_verbose_mode(domain)) > printk(KERN_WARNING "TOMOYO-%s: Access '%s %s %s' " > 
"denied for %s\n", tomoyo_get_msg(is_enforce), > @@ -1344,6 +1302,8 @@ int tomoyo_check_2path_perm(struct tomoy > tomoyo_update_double_path_acl(operation, name1, name2, domain, > false); > } > + ok: > + srcu_read_unlock(&tomoyo_ss, idx); > out: > tomoyo_free(buf1); > tomoyo_free(buf2); > --- security-testing-2.6.git.orig/security/tomoyo/realpath.c > +++ security-testing-2.6.git/security/tomoyo/realpath.c > @@ -1,3 +1,4 @@ > + > /* > * security/tomoyo/realpath.c > * > @@ -15,6 +16,9 @@ > #include <linux/fs_struct.h> > #include "common.h" > #include "realpath.h" > +#include "tomoyo.h" > + > +struct srcu_struct tomoyo_ss; > > /** > * tomoyo_encode: Convert binary string to ascii string. > @@ -223,6 +227,17 @@ bool tomoyo_memory_ok(void *ptr) > return false; > } > > +/** > + * tomoyo_free_element - Free memory for elements. > + * > + * @ptr: Pointer to allocated memory. > + */ > +static void tomoyo_free_element(void *ptr) > +{ > + atomic_sub(ksize(ptr), &tomoyo_allocated_memory_for_elements); > + kfree(ptr); > +} > + > /* Memory allocated for string data in bytes. */ > static atomic_t tomoyo_allocated_memory_for_savename; > /* Quota for holding string data in bytes. */ > @@ -238,15 +253,10 @@ static unsigned int tomoyo_quota_for_sav > /* > * tomoyo_name_entry is a structure which is used for linking > * "struct tomoyo_path_info" into tomoyo_name_list . > - * > - * Since tomoyo_name_list manages a list of strings which are shared by > - * multiple processes (whereas "struct tomoyo_path_info" inside > - * "struct tomoyo_path_info_with_data" is not shared), a reference counter will > - * be added to "struct tomoyo_name_entry" rather than "struct tomoyo_path_info" > - * when TOMOYO starts supporting garbage collector. 
> */ > struct tomoyo_name_entry { > struct list_head list; > + atomic_t users; > struct tomoyo_path_info entry; > }; > > @@ -287,10 +297,11 @@ const struct tomoyo_path_info *tomoyo_ge > entry = kmalloc(sizeof(*entry) + len, GFP_KERNEL); > allocated_len = entry ? ksize(entry) : 0; > mutex_lock(&tomoyo_name_list_lock); > - list_for_each_entry(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], > - list) { > + list_for_each_entry_rcu(ptr, &tomoyo_name_list[hash % TOMOYO_MAX_HASH], > + list) { > if (hash != ptr->entry.hash || strcmp(name, ptr->entry.name)) > continue; > + atomic_inc(&ptr->users); > error = 0; > break; > } > @@ -305,8 +316,9 @@ const struct tomoyo_path_info *tomoyo_ge > ptr->entry.name = ((char *) ptr) + sizeof(*ptr); > memmove((char *) ptr->entry.name, name, len); > tomoyo_fill_path_info(&ptr->entry); > - list_add_tail(&ptr->list, > - &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); > + atomic_set(&ptr->users, 1); > + list_add_tail_rcu(&ptr->list, > + &tomoyo_name_list[hash % TOMOYO_MAX_HASH]); > entry = NULL; > error = 0; > } > @@ -321,6 +333,31 @@ const struct tomoyo_path_info *tomoyo_ge > } > > /** > + * tomoyo_put_name - Delete shared memory for string data. > + * > + * @ptr: Pointer to "struct tomoyo_path_info". > + */ > +void tomoyo_put_name(const struct tomoyo_path_info *name) > +{ > + struct tomoyo_name_entry *ptr; > + bool can_delete = false; > + > + if (!name) > + return; > + ptr = container_of(name, struct tomoyo_name_entry, entry); > + mutex_lock(&tomoyo_name_list_lock); > + if (atomic_dec_and_test(&ptr->users)) { > + list_del(&ptr->list); > + can_delete = true; > + } > + mutex_unlock(&tomoyo_name_list_lock); > + if (can_delete) { > + atomic_sub(ksize(ptr), &tomoyo_allocated_memory_for_savename); > + kfree(ptr); > + } > +} > + > +/** > * tomoyo_realpath_init - Initialize realpath related code. 
> */ > void __init tomoyo_realpath_init(void) > @@ -331,12 +368,14 @@ void __init tomoyo_realpath_init(void) > for (i = 0; i < TOMOYO_MAX_HASH; i++) > INIT_LIST_HEAD(&tomoyo_name_list[i]); > INIT_LIST_HEAD(&tomoyo_kernel_domain.acl_info_list); > + if (init_srcu_struct(&tomoyo_ss)) > + panic("Can't initialize tomoyo_ss"); > tomoyo_kernel_domain.domainname = tomoyo_get_name(TOMOYO_ROOT_NAME); > - list_add_tail(&tomoyo_kernel_domain.list, &tomoyo_domain_list); > - down_read(&tomoyo_domain_list_lock); > + list_add_tail_rcu(&tomoyo_kernel_domain.list, &tomoyo_domain_list); > + i = srcu_read_lock(&tomoyo_ss); > if (tomoyo_find_domain(TOMOYO_ROOT_NAME) != &tomoyo_kernel_domain) > panic("Can't register tomoyo_kernel_domain"); > - up_read(&tomoyo_domain_list_lock); > + srcu_read_unlock(&tomoyo_ss, i); > } > > /* Memory allocated for temporary purpose. */ > @@ -431,3 +470,311 @@ int tomoyo_write_memory_quota(struct tom > tomoyo_quota_for_elements = size; > return 0; > } > + > +/* Garbage collecter functions */ > + > +static inline void tomoyo_gc_del_domain_initializer > +(struct tomoyo_domain_initializer_entry *ptr) > +{ > + tomoyo_put_name(ptr->domainname); > + tomoyo_put_name(ptr->program); > +} > + > +static inline void tomoyo_gc_del_domain_keeper > +(struct tomoyo_domain_keeper_entry *ptr) > +{ > + tomoyo_put_name(ptr->domainname); > + tomoyo_put_name(ptr->program); > +} > + > +static inline void tomoyo_gc_del_alias(struct tomoyo_alias_entry *ptr) > +{ > + tomoyo_put_name(ptr->original_name); > + tomoyo_put_name(ptr->aliased_name); > +} > + > +static inline void tomoyo_gc_del_readable > +(struct tomoyo_globally_readable_file_entry *ptr) > +{ > + tomoyo_put_name(ptr->filename); > +} > + > +static inline void tomoyo_gc_del_pattern(struct tomoyo_pattern_entry *ptr) > +{ > + tomoyo_put_name(ptr->pattern); > +} > + > +static inline void tomoyo_gc_del_no_rewrite > +(struct tomoyo_no_rewrite_entry *ptr) > +{ > + tomoyo_put_name(ptr->pattern); > +} > + > +static inline void 
tomoyo_gc_del_manager > +(struct tomoyo_policy_manager_entry *ptr) > +{ > + tomoyo_put_name(ptr->manager); > +} > + > +static void tomoyo_gc_del_acl(struct tomoyo_acl_info *acl) > +{ > + switch (tomoyo_acl_type1(acl)) { > + struct tomoyo_single_path_acl_record *acl1; > + struct tomoyo_double_path_acl_record *acl2; > + case TOMOYO_TYPE_SINGLE_PATH_ACL: > + acl1 = container_of(acl, struct tomoyo_single_path_acl_record, > + head); > + tomoyo_put_name(acl1->filename); > + break; > + case TOMOYO_TYPE_DOUBLE_PATH_ACL: > + acl2 = container_of(acl, struct tomoyo_double_path_acl_record, > + head); > + tomoyo_put_name(acl2->filename1); > + tomoyo_put_name(acl2->filename2); > + break; > + } > +} > + > +static bool tomoyo_gc_del_domain(struct tomoyo_domain_info *domain) > +{ > + struct tomoyo_acl_info *acl; > + struct tomoyo_acl_info *tmp; > + /* > + * We need to recheck domain->users because > + * tomoyo_find_next_domain() increments it. > + */ > + if (atomic_read(&domain->users)) > + return false; > + /* Delete all entries in this domain. */ > + list_for_each_entry_safe(acl, tmp, &domain->acl_info_list, list) { > + list_del_rcu(&acl->list); > + tomoyo_gc_del_acl(acl); > + tomoyo_free_element(acl); > + } > + tomoyo_put_name(domain->domainname); > + return true; > +} > + > +enum tomoyo_gc_id { > + TOMOYO_ID_DOMAIN_INITIALIZER, > + TOMOYO_ID_DOMAIN_KEEPER, > + TOMOYO_ID_ALIAS, > + TOMOYO_ID_GLOBALLY_READABLE, > + TOMOYO_ID_PATTERN, > + TOMOYO_ID_NO_REWRITE, > + TOMOYO_ID_MANAGER, > + TOMOYO_ID_ACL, > + TOMOYO_ID_DOMAIN > +}; > + > +struct tomoyo_gc_entry { > + struct list_head list; > + int type; > + void *element; > +}; > + > + > +/* Caller holds tomoyo_policy_lock mutex. 
*/ > +static bool tomoyo_add_to_gc(const int type, void *element, > + struct list_head *head) > +{ > + struct tomoyo_gc_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC); > + if (!entry) > + return false; > + entry->type = type; > + entry->element = element; > + list_add(&entry->list, head); > + return true; > +} > + > +/** > + * tomoyo_gc_thread_main - Garbage collector thread for TOMOYO. > + * > + * @unused: Not used. > + * > + * This function is exclusively executed. > + */ > +static int tomoyo_gc_thread_main(void *unused) > +{ > + static DEFINE_MUTEX(tomoyo_gc_mutex); > + static LIST_HEAD(tomoyo_gc_queue); > + if (!mutex_trylock(&tomoyo_gc_mutex)) > + return 0; > + > + mutex_lock(&tomoyo_policy_lock); > + { > + struct tomoyo_globally_readable_file_entry *ptr; > + list_for_each_entry_rcu(ptr, &tomoyo_globally_readable_list, > + list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_GLOBALLY_READABLE, ptr, > + &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_pattern_entry *ptr; > + list_for_each_entry_rcu(ptr, &tomoyo_pattern_list, list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_PATTERN, ptr, > + &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_no_rewrite_entry *ptr; > + list_for_each_entry_rcu(ptr, &tomoyo_no_rewrite_list, list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_NO_REWRITE, ptr, > + &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_domain_initializer_entry *ptr; > + list_for_each_entry_rcu(ptr, &tomoyo_domain_initializer_list, > + list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN_INITIALIZER, > + ptr, &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_domain_keeper_entry *ptr; > + 
list_for_each_entry_rcu(ptr, &tomoyo_domain_keeper_list, > + list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN_KEEPER, ptr, > + &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_alias_entry *ptr; > + list_for_each_entry_rcu(ptr, &tomoyo_alias_list, list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_ALIAS, ptr, > + &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_policy_manager_entry *ptr; > + list_for_each_entry_rcu(ptr, &tomoyo_policy_manager_list, > + list) { > + if (!ptr->is_deleted) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_MANAGER, ptr, > + &tomoyo_gc_queue)) > + list_del_rcu(&ptr->list); > + else > + break; > + } > + } > + { > + struct tomoyo_domain_info *domain; > + list_for_each_entry_rcu(domain, &tomoyo_domain_list, list) { > + struct tomoyo_acl_info *acl; > + list_for_each_entry_rcu(acl, &domain->acl_info_list, > + list) { > + if (!(acl->type & TOMOYO_ACL_DELETED)) > + continue; > + if (tomoyo_add_to_gc(TOMOYO_ID_ACL, acl, > + &tomoyo_gc_queue)) > + list_del_rcu(&acl->list); > + else > + break; > + } > + if (domain->is_deleted && > + !atomic_read(&domain->users)) { > + if (tomoyo_add_to_gc(TOMOYO_ID_DOMAIN, domain, > + &tomoyo_gc_queue)) > + list_del_rcu(&domain->list); > + else > + break; > + } > + } > + } > + mutex_unlock(&tomoyo_policy_lock); > + if (list_empty(&tomoyo_gc_queue)) > + goto done; > + synchronize_srcu(&tomoyo_ss); > + { > + struct tomoyo_gc_entry *p; > + struct tomoyo_gc_entry *tmp; > + list_for_each_entry_safe(p, tmp, &tomoyo_gc_queue, list) { > + switch (p->type) { > + case TOMOYO_ID_DOMAIN_INITIALIZER: > + tomoyo_gc_del_domain_initializer(p->element); > + break; > + case TOMOYO_ID_DOMAIN_KEEPER: > + tomoyo_gc_del_domain_keeper(p->element); > + break; > + case TOMOYO_ID_ALIAS: > + tomoyo_gc_del_alias(p->element); > + break; > + case 
TOMOYO_ID_GLOBALLY_READABLE: > + tomoyo_gc_del_readable(p->element); > + break; > + case TOMOYO_ID_PATTERN: > + tomoyo_gc_del_pattern(p->element); > + break; > + case TOMOYO_ID_NO_REWRITE: > + tomoyo_gc_del_no_rewrite(p->element); > + break; > + case TOMOYO_ID_MANAGER: > + tomoyo_gc_del_manager(p->element); > + break; > + case TOMOYO_ID_ACL: > + tomoyo_gc_del_acl(p->element); > + break; > + case TOMOYO_ID_DOMAIN: > + if (!tomoyo_gc_del_domain(p->element)) > + continue; > + break; > + } > + tomoyo_free_element(p->element); > + list_del(&p->list); > + kfree(p); > + } > + } > + done: > + mutex_unlock(&tomoyo_gc_mutex); > + return 0; > +} > + > +/** > + * tomoyo_gc_thread - Garbage collector thread for TOMOYO. > + * > + * @unused: Not used. > + */ > +int tomoyo_gc_thread(void *unused) > +{ > + /* > + * Maybe this thread should be created and terminated as needed > + * rather than created upon boot and living forever... > + */ > + while (1) { > + msleep(30000); > + tomoyo_gc_thread_main(unused); > + } > +} > --- security-testing-2.6.git.orig/security/tomoyo/realpath.h > +++ security-testing-2.6.git/security/tomoyo/realpath.h > @@ -44,10 +44,7 @@ bool tomoyo_memory_ok(void *ptr); > * The RAM is shared, so NEVER try to modify or kfree() the returned name. > */ > const struct tomoyo_path_info *tomoyo_get_name(const char *name); > -static inline void tomoyo_put_name(const struct tomoyo_path_info *name) > -{ > - /* It's a dummy so far. */ > -} > +void tomoyo_put_name(const struct tomoyo_path_info *name); > > /* Allocate memory for temporary use (e.g. permission checks). */ > void *tomoyo_alloc(const size_t size); > --- security-testing-2.6.git.orig/security/tomoyo/tomoyo.c > +++ security-testing-2.6.git/security/tomoyo/tomoyo.c > @@ -22,9 +22,19 @@ static int tomoyo_cred_prepare(struct cr > * we don't need to duplicate. 
> */ > new->security = old->security; > + if (new->security) > + atomic_inc(&((struct tomoyo_domain_info *) > + new->security)->users); > return 0; > } > > +static void tomoyo_cred_free(struct cred *cred) > +{ > + struct tomoyo_domain_info *domain = cred->security; > + if (domain) > + atomic_dec(&domain->users); > +} > + > static int tomoyo_bprm_set_creds(struct linux_binprm *bprm) > { > int rc; > @@ -49,7 +59,11 @@ static int tomoyo_bprm_set_creds(struct > * Tell tomoyo_bprm_check_security() is called for the first time of an > * execve operation. > */ > - bprm->cred->security = NULL; > + if (bprm->cred->security) { > + atomic_dec(&((struct tomoyo_domain_info *) > + bprm->cred->security)->users); > + bprm->cred->security = NULL; > + } > return 0; > } > > @@ -263,6 +277,7 @@ static int tomoyo_dentry_open(struct fil > static struct security_operations tomoyo_security_ops = { > .name = "tomoyo", > .cred_prepare = tomoyo_cred_prepare, > + .cred_free = tomoyo_cred_free, > .bprm_set_creds = tomoyo_bprm_set_creds, > .bprm_check_security = tomoyo_bprm_check_security, > #ifdef CONFIG_SYSCTL > @@ -291,6 +306,7 @@ static int __init tomoyo_init(void) > panic("Failure registering TOMOYO Linux"); > printk(KERN_INFO "TOMOYO Linux initialized\n"); > cred->security = &tomoyo_kernel_domain; > + atomic_inc(&tomoyo_kernel_domain.users); > tomoyo_realpath_init(); > return 0; > } ^ permalink raw reply [flat|nested] 16+ messages in thread
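The collector in the patch above works in two phases: while holding tomoyo_policy_lock it unlinks deleted entries onto a private queue, and only after synchronize_srcu() does it free what was queued. A minimal userspace sketch of that shape (hypothetical names; the grace-period wait is elided here, since only the unlink-then-free split is being illustrated) might look like:

```c
#include <stdlib.h>

/* One list entry; is_deleted plays the role of the patch's deletion flags. */
struct entry {
	struct entry *next;
	int is_deleted;
	int value;
};

/* Pass 1: unlink every deleted entry from *list onto *gc_queue
 * (the list_del_rcu() analogue).  In the real code a grace period
 * must elapse between this pass and the freeing pass. */
static void gc_collect(struct entry **list, struct entry **gc_queue)
{
	struct entry **p = list;

	while (*p) {
		if ((*p)->is_deleted) {
			struct entry *victim = *p;

			*p = victim->next;        /* unlink from the live list */
			victim->next = *gc_queue; /* queue for deferred freeing */
			*gc_queue = victim;
		} else {
			p = &(*p)->next;
		}
	}
}

/* Pass 2: after the grace period, free everything queued; returns the
 * number of entries freed. */
static int gc_free_queue(struct entry **gc_queue)
{
	int freed = 0;

	while (*gc_queue) {
		struct entry *victim = *gc_queue;

		*gc_queue = victim->next;
		free(victim);
		freed++;
	}
	return freed;
}

/* Helper to build a test list. */
static struct entry *push(struct entry *head, int value, int deleted)
{
	struct entry *e = malloc(sizeof(*e));

	e->next = head;
	e->value = value;
	e->is_deleted = deleted;
	return e;
}
```

The point of the split is that a reader that raced with the unlink may still be walking the old pointers, so freeing is safe only once all such readers are known to have finished.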
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-18 5:34 ` Tetsuo Handa 2009-06-18 6:45 ` [PATCH 3/3] TOMOYO: Add SRCU based garbage collector Tetsuo Handa @ 2009-06-18 15:28 ` Paul E. McKenney 2009-06-19 4:57 ` Tetsuo Handa 1 sibling, 1 reply; 16+ messages in thread From: Paul E. McKenney @ 2009-06-18 15:28 UTC (permalink / raw) To: Tetsuo Handa; +Cc: linux-security-module, linux-kernel On Thu, Jun 18, 2009 at 02:34:42PM +0900, Tetsuo Handa wrote: > Hello. > > Paul E. McKenney wrote: > > Consider the following sequence of events: > > > > o CPU 0 picks up users_counter_idx int local variable idx. > > Let's assume that the value is zero. > > > > o CPU 0 is now preempted, interrupted, or otherwise delayed. > > > > o CPU 1 starts garbage collection, finding some elements to > > delete, thus setting "element_deleted" to true. > > > > o CPU 1 continues garbage collection, inverting the value of > > users_counter_idx, so that the value is now one, waiting > > for the value-zero readers, and freeing up the old elements. [1] > > o CPU 0 continues execution, first atomically incrementing > > users_counter[0], then traversing the list, possibly sleeping. > > > > o CPU 2 starts a new round of garbage collection, again finding > > some elements to delete, and thus again setting > > "elements_deleted" to true. One of the elements deleted > > is the one that CPU 0 is currently referencing while asleep. > > > No. CPU 2 can't start a new round of GC because GC function is exclusively > executed because of gc_mutex mutex. But CPU 1 would have released gc_mutex back at time [1], right? > > o CPU 2 continues garbage collection, inverting the value of > > users_counter_idx, so that the value is now zero, waiting > > for the value-one readers, and freeing up the old elements. > > Note that CPU 0 is a value-zero reader, so that CPU 2 will > > not wait on it. > > > > CPU 2 therefore kfree()s the element that CPU 0 is currently > > referencing. 
> > > CPU 2 won't continue GC, for CPU 2 can't start a new round of GC. I still don't see why CPU 1 would not have released gc_mutex back at point [1]. > > o CPU 0 wakes up, and suffers possibly fatal disappointment upon > > attempting to reference an element that has been freed -- and, > > worse yet, possibly re-allocated as some other type of > > structure. > > > CPU 0 won't suffer, for first round of GC (by CPU 1) prevents CPU 2 from > > starting a new round of GC. Why would CPU 1 be unable to complete its round of GC, thus releasing gc_mutex, thus allowing CPU 2 to start a new one? For that matter, CPU 1 could start a new round, correct? > > Or am I missing something in your pseudocode? > I think you missed that GC function is executed exclusively. > > The race between readers and GC is avoided as below. If you can tell me why CPU 1 cannot release gc_mutex, I will look at the following. Until then, I will stand by my scenario above. ;-) > (a-1) A reader reads users_counter_idx and saves to r_idx > (a-2) GC removes element from the list using RCU > (a-3) GC reads users_counter_idx and saves to g_idx > (a-4) GC inverts users_counter_idx > (a-5) GC releases the removed element > (a-6) A reader increments users_counter[r_idx] > (a-7) A reader won't see the element removed by GC because > the reader has not started list traversal as of (a-2) > > (b-1) A reader reads users_counter_idx and saves to r_idx > (b-2) A reader increments users_counter[r_idx] > (b-3) GC removes element from the list using RCU > (b-4) A reader won't see the element removed by GC > (b-5) GC reads users_counter_idx and saves to g_idx > (b-6) GC inverts users_counter_idx > (b-7) GC waits for users_counter[g_idx] to become 0 > (b-8) A reader decrements users_counter[r_idx] > (b-9) GC releases the removed element > > (c-1) A reader reads users_counter_idx and saves to r_idx > (c-2) A reader increments users_counter[r_idx] > (c-3) A reader sees the element > (c-4) GC removes element from the
list using RCU > (c-5) GC reads users_counter_idx and saves to g_idx > (c-6) GC inverts users_counter_idx > (c-7) GC waits for users_counter[g_idx] to become 0 > (c-8) A reader decrements users_counter[r_idx] > (c-9) GC releases the removed element > > What I worry is that some memory barriers might be needed between > > > > { > > > /* Get counter index. */ > > > int idx = atomic_read(&users_counter_idx); > > > /* Lock counter. */ > > > atomic_inc(&users_counter[idx]); > - here - > > > list_for_each_entry_rcu() { > > > ... /* Allowed to sleep. */ > > > } > - here - > > > /* Unlock counter. */ > > > atomic_dec(&users_counter[idx]); > > > } > > and > > > > if (element_deleted) { > > > /* Swap active counter. */ > > > const int idx = atomic_read(&users_counter_idx); > - here - > > > atomic_set(&users_counter_idx, idx ^ 1); > - here - > > > /* > > > * Wait for readers who are using previously active counter. > > > * This is similar to synchronize_rcu() while this code allows > > > * readers to do operations which may sleep. > > > */ > > > while (atomic_read(&users_counter[idx])) > > > msleep(1000); > > > /* > > > * Nobody is using previously active counter. > > > * Ready to release memory of elements removed before > > > * previously active counter became inactive. > > > */ > > > kfree(element); > > > } > > in order to enforce ordering. Quite possibly. One of the advantages of using things like SRCU is that the necessary memory barriers are already in place. (knock wood) > > Also, if you have lots of concurrent readers, you can suffer high memory > > contention on the users_counter[] array, correct? > > Excuse me. I couldn't understand "memory contention"... > > ( http://www.answers.com/topic/memory-contention ) > | A situation in which two different programs, or two parts of a program, > | try to read items in the same block of memory at the same time. > Why suffered by atomic_read() at the same time? 
> Cache invalidation by atomic_inc()/atomic_dec() a shared variable? Yes, cache invalidation by atomic_inc()/atomic_dec() of a shared variable can definitely result in memory contention and extremely bad performance. > ( http://wiki.answers.com/Q/What_is_memory_contention ) > | Memory contention is a state a OS memory manager can reside in when to many > | memory requests (alloc, realloc, free) are issued to it from an active > | application possibly leading to a DOS condition specific to that > | application. > No memory allocation for users_counter[] array. This is not the type of memory contention I was thinking of. > > I recommend that you look into use of SRCU in this case. > > I have one worry regarding SRCU. > Inside synchronize_srcu(), there is a loop > > while (srcu_readers_active_idx(sp, idx)) > schedule_timeout_interruptible(1); > > but the reader's sleeping duration varies from less than one second to > more than hours. > > Checking for counters for every jiffies sounds too much waste of CPU. > Delaying kfree() for seconds or minutes won't cause troubles for TOMOYO. > It would be nice if checking interval is configurable like > "schedule_timeout_interruptible(sp->timeout);". This would not be a difficult change, and certainly would make sense in your case. I would be happy to review a patch from you, or to create a patch to SRCU if you would prefer. I would add a field as you say, and add an API element: void set_srcu_timeout(struct srcu_struct *sp, unsigned long timeout) The timeout would default to 1, and a value of 0 would be interpreted as 1. In your case, perhaps you would do the following after initializing the srcu_struct: set_srcu_timeout(&tomoyo_srcu, HZ); Would this work? > > Anyway, the general approach would be to make changes to your code > > roughly as follows: > > > > 1. replace your users_counter and users_counter_idx with a > > struct srcu_struct. > > > > 2.
In the reader, replace the fetch from users_counter_idx and > > the atomic_inc() with srcu_read_lock(). > > > > 3. In the garbage collector, replace the fetch/update of > > users_counter_idx and the "while" loop with synchronize_srcu(). > > > I see. Since I isolated the GC as a dedicated kernel thread, writers no longer > wait for elements to be kfree()ed. I can use SRCU. Very good! > > Or is there some reason why SRCU does not work for you? > None for mainline version. > > I'm also maintaining TOMOYO for older/distributor kernels for those who want to > enable both SELinux/SMACK/AppArmor/grsecurity etc. and TOMOYO at the same time. > Thus, if my idea works, I want to backport it to TOMOYO for these kernels. SRCU has been in the kernel since 2.6.18, but would be easy to backport. If you have a recent git tree, run "gitk kernel/srcu.c include/linux/srcu.h". You will find four commits that are pretty well isolated -- you would need the changes to kernel/srcu.c, include/linux/srcu.h, and kernel/Makefile. Thanx, Paul ^ permalink raw reply [flat|nested] 16+ messages in thread
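For reference, the two-counter scheme that the SRCU conversion replaces can be sketched with C11 atomics roughly as follows (illustrative names only, and no memory barriers beyond what the atomics provide). Paul's race scenario above is exactly the window inside reader_lock(), between the load of the index and the increment:

```c
#include <stdatomic.h>

/* Active/inactive reader counters and the index of the active one,
 * as in the pseudocode at the top of the thread. */
static atomic_int users_counter[2];
static atomic_int users_counter_idx;

/* Analogue of srcu_read_lock(): snapshot the index, then lock it.
 * A reader preempted between these two steps can end up counted on
 * the index the updater is no longer waiting for -- the race Paul
 * describes. */
static int reader_lock(void)
{
	int idx = atomic_load(&users_counter_idx);

	atomic_fetch_add(&users_counter[idx], 1);
	return idx;
}

/* Analogue of srcu_read_unlock(). */
static void reader_unlock(int idx)
{
	atomic_fetch_sub(&users_counter[idx], 1);
}

/* First half of the synchronize_srcu() analogue: flip the active index
 * and return the old one; a real updater would then poll until
 * users_counter[old] drains to zero before freeing anything. */
static int flip_and_get_old_idx(void)
{
	int old = atomic_load(&users_counter_idx);

	atomic_store(&users_counter_idx, old ^ 1);
	return old;
}
```

SRCU implements the same flip-and-drain idea per CPU, with the required barriers already in place, which is why the conversion steps above are so mechanical.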
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-18 15:28 ` [PATCH] TOMOYO: Add garbage collector support. (v3) Paul E. McKenney @ 2009-06-19 4:57 ` Tetsuo Handa 2009-06-20 1:28 ` Paul E. McKenney 0 siblings, 1 reply; 16+ messages in thread From: Tetsuo Handa @ 2009-06-19 4:57 UTC (permalink / raw) To: paulmck; +Cc: linux-security-module, linux-kernel Hello. The GC thread is a loop of (1) Take gc_mutex (2) Remove an element from the list using RCU (3) Wait for readers without releasing gc_mutex (4) Free up that element (5) Release gc_mutex A new round will not see element which was removed by previous round. Paul E. McKenney wrote: > > > Consider the following sequence of events: > > > > > > o CPU 0 picks up users_counter_idx int local variable idx. > > > Let's assume that the value is zero. > > > > > > o CPU 0 is now preempted, interrupted, or otherwise delayed. > > > o CPU 1 takes gc_mutex. > > > o CPU 1 starts garbage collection, finding some elements to > > > delete, thus setting "element_deleted" to true. > > > > > > o CPU 1 continues garbage collection, inverting the value of > > > users_counter_idx, so that the value is now one, waiting > > > for the value-zero readers, and freeing up the old elements. > o CPU 1 releases gc_mutex. > [1] > > > > o CPU 0 continues execution, first atomically incrementing > > > users_counter[0], then traversing the list, possibly sleeping. > > > o CPU 2 takes gc_mutex. > > > o CPU 2 starts a new round of garbage collection, again finding > > > some elements to delete, and thus again setting > > > "elements_deleted" to true. One of the elements deleted > > > is the one that CPU 0 is currently referencing while asleep. > > > > > No. CPU 2 can't start a new round of GC because GC function is exclusively > > executed because of gc_mutex mutex. > > But CPU 1 would have released gc_mutex back at time [1], right? 
> Yes, CPU 1 will release gc_mutex after freeing up elements (which were removed from the list after gc_mutex was taken). If CPU 0 sleeps between "idx = atomic_read(&users_counter_idx)" and "atomic_inc(&users_counter[idx])", CPU 0 will not see the element removed by CPU 1 because CPU 0 has not started list traversal. Same result for CPU 0 sleeping between "atomic_inc(&users_counter[idx])" and "list_for_each_rcu() {". > > > o CPU 2 continues garbage collection, inverting the value of > > > users_counter_idx, so that the value is now zero, waiting > > > for the value-one readers, and freeing up the old elements. > > > Note that CPU 0 is a value-zero reader, so that CPU 2 will > > > not wait on it. > > > > > > CPU 2 therefore kfree()s the element that CPU 0 is currently > > > referencing. > > > > > CPU 2 won't continue GC, for CPU 2 can't start a new round of GC. > > I still don't see why CPU 0 would not have released gc_mutex back > at point [1]. > CPU 1 has released gc_mutex at point [1]. In that case, CPU 2 can take gc_mutex and start a new round. Nobody can start a new round before the previous round finishes. CPU 2 can start a new round, but by that time, CPU 0 has finished list traversal and atomically decremented users_counter[0]. CPU 1 won't finish a GC round before CPU 0 decrements users_counter[0], and thus CPU 2 won't start a new GC round before CPU 0 finishes list traversal. > > > o CPU 0 wakes up, and suffers possibly fatal disappointment upon > > > attempting to reference an element that has been freed -- and, > > > worse yet, possibly re-allocated as some other type of > > > structure. > > > > > CPU 0 won't suffer, for first round of GC (by CPU 1) prevents CPU 2 from > > starting a new round of GC. > > Why would CPU 1 be unable to complete its round of GC, thus releasing > gc_mutex, thus allowing CPU 2 from starting a new one? For that matter, > CPU 1 could start a new round, correct? > Because CPU 1 waits for CPU 0's atomic_dec() without releasing gc_mutex.
> > > Or am I missing something in your pseudocode? > > I think you missed that GC function is executed exclusively. > > > > The race between readers and GC is avoided as below. > > If you can tell me why CPU 1 cannot release gc_mutex, I will look at > the following. Until then, I will stand by my scenario above. ;-) CPU 1 can release gc_mutex when that round finishes (i.e. after freeing up elements removed by that round). > > > Also, if you have lots of concurrent readers, you can suffer high memory > > > contention on the users_counter[] array, correct? > > > > Excuse me. I couldn't understand "memory contention"... > > > > ( http://www.answers.com/topic/memory-contention ) > > | A situation in which two different programs, or two parts of a program, > > | try to read items in the same block of memory at the same time. > > Why suffered by atomic_read() at the same time? > > Cache invalidation by atomic_inc()/atomic_dec() a shared variable? > > Yes, cache invalidation by atomic_inc()/atomic_dec() of a shared > variable can definitly result in memory contention and extremely > bad performance. I have poor knowledge about hardware mechanisms. Would you please answer as far as you can? I experienced 8086 (not 80186 and later) assembly programming a bit on MS-DOS. My experience says that (I don't know actual cycle numbers) "mov eax, ebx" takes one CPU cycle. "mov eax, [ebx]" takes three CPU cycles. And it seems to me that modifying a shared variable doesn't affect performance. But if caching mechanisms exist, "mov eax, [ebx]" takes one CPU cycle if [ebx] is on cache, three CPU cycles if not on cache, is this correct? And modifying [ebx] invalidates cache, doesn't it? Then, atomic_t counter[NR_CPUS]; atomic_inc(&counter[smp_processor_id()]); shows better performance than atomic_t counter; atomic_inc(&counter); ? Does the cache invalidation mechanism invalidate only sizeof(atomic_t) bytes (or invalidates more bytes nearby &counter)?
Another keyword which is worrisome for me is NUMA. My understanding is that NUMA splits RAM into nodes and tries to use RAM in the current node. In a NUMA environment, (for example) "mov eax, [ebx]" takes three CPU cycles if ebx refers to the current node and a hundred CPU cycles if ebx refers to another node? Then, is it preferable to place a copy of the ACL information on every node rather than sharing one copy? > > > I recommend that you look into use of SRCU in this case. > > > > I have one worry regarding SRCU. > > Inside synchronize_srcu(), there is a loop > > > > while (srcu_readers_active_idx(sp, idx)) > > schedule_timeout_interruptible(1); > > > > but the reader's sleeping duration varies from less than one second to > > more than hours. > > > > Checking for counters for every jiffies sounds too much waste of CPU. > > Delaying kfree() for seconds or minutes won't cause troubles for TOMOYO. > > It would be nice if checking interval is configurable like > > "schedule_timeout_interruptible(sp->timeout);". > > This would not be a difficult change, and certainly would make sense in > your case. I would be happy to review a patch from you, or to create a > patch to SRCU if you would prefer. > > I would add a field as you say, and add an API element: > > void set_srcu_timeout(struct srcu_struct *sp, unsigned long timeout) > > The timeout would default to 1, and a value of 0 would be interpreted > as 1. In your case, perhaps you would do the following after initializing > the srcu_struct: > > set_srcu_timeout(&tomoyo_srcu, HZ); > > Would this work? Yes. May I? (I use "long" rather than "unsigned long" because schedule_timeout() rejects negative timeout value.) Regards. -------------------- Subject: [PATCH] SRCU: Allow longer timeout for non-urgent reclaimer. Currently synchronize_srcu() checks for readers for every jiffies. But if reader sleeps for long, we don't need to check so frequently. This patch allows non-urgent SRCU reclaimers (e.g.
checking for every second is sufficient) to use longer timeout. Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> --- include/linux/srcu.h | 2 ++ kernel/srcu.c | 14 +++++++++++++- 2 files changed, 15 insertions(+), 1 deletion(-) --- security-testing-2.6.orig/include/linux/srcu.h +++ security-testing-2.6/include/linux/srcu.h @@ -35,6 +35,7 @@ struct srcu_struct { int completed; struct srcu_struct_array *per_cpu_ref; struct mutex mutex; + long timeout; }; #ifndef CONFIG_PREEMPT @@ -49,5 +50,6 @@ int srcu_read_lock(struct srcu_struct *s void srcu_read_unlock(struct srcu_struct *sp, int idx) __releases(sp); void synchronize_srcu(struct srcu_struct *sp); long srcu_batches_completed(struct srcu_struct *sp); +void set_srcu_timeout(struct srcu_struct *sp, long timeout); #endif --- security-testing-2.6.orig/kernel/srcu.c +++ security-testing-2.6/kernel/srcu.c @@ -44,6 +44,7 @@ */ int init_srcu_struct(struct srcu_struct *sp) { + sp->timeout = 1; sp->completed = 0; mutex_init(&sp->mutex); sp->per_cpu_ref = alloc_percpu(struct srcu_struct_array); @@ -201,7 +202,7 @@ void synchronize_srcu(struct srcu_struct */ while (srcu_readers_active_idx(sp, idx)) - schedule_timeout_interruptible(1); + schedule_timeout_interruptible(sp->timeout); synchronize_sched(); /* Force memory barrier on all CPUs. */ @@ -249,6 +250,17 @@ long srcu_batches_completed(struct srcu_ return sp->completed; } +/** + * set_srcu_timeout - set checking interval for synchronize_srcu() + * @sp: srcu_struct + * @timeout: checking interval in jiffies. + */ +void set_srcu_timeout(struct srcu_struct *sp, long timeout) +{ + if (timeout >= 1 && timeout != MAX_SCHEDULE_TIMEOUT) + sp->timeout = timeout; +} + EXPORT_SYMBOL_GPL(init_srcu_struct); EXPORT_SYMBOL_GPL(cleanup_srcu_struct); EXPORT_SYMBOL_GPL(srcu_read_lock); ^ permalink raw reply [flat|nested] 16+ messages in thread
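The validation rule in the set_srcu_timeout() patch above — accept only values in the range [1, MAX_SCHEDULE_TIMEOUT) and silently keep the previous setting otherwise — can be exercised with userspace stand-ins for the kernel types (struct and function names here are invented for the sketch):

```c
#include <limits.h>

/* Stand-in for the kernel constant; the real one is LONG_MAX >> 1 or
 * similar depending on the kernel version -- LONG_MAX is close enough
 * for illustrating the boundary check. */
#define MAX_SCHEDULE_TIMEOUT LONG_MAX

/* Just the field the patch adds to struct srcu_struct. */
struct srcu_timeout_sketch {
	long timeout;	/* polling interval for synchronize_srcu(), in jiffies */
};

/* Same condition as the patch: reject timeouts below one jiffy and the
 * "infinite" value, leaving the previous (default 1) setting in place. */
static void sketch_set_srcu_timeout(struct srcu_timeout_sketch *sp, long timeout)
{
	if (timeout >= 1 && timeout != MAX_SCHEDULE_TIMEOUT)
		sp->timeout = timeout;
}
```

Note the patch ignores invalid values rather than clamping 0 to 1 as Paul's sketch of the API suggested; either behavior leaves the interval at least one jiffy.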
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-19 4:57 ` Tetsuo Handa @ 2009-06-20 1:28 ` Paul E. McKenney 2009-06-20 7:04 ` Tetsuo Handa 0 siblings, 1 reply; 16+ messages in thread From: Paul E. McKenney @ 2009-06-20 1:28 UTC (permalink / raw) To: Tetsuo Handa; +Cc: linux-security-module, linux-kernel On Fri, Jun 19, 2009 at 01:57:46PM +0900, Tetsuo Handa wrote: > Hello. > > The GC thread is a loop of > > (1) Take gc_mutex > (2) Remove an element from the list using RCU > (3) Wait for readers without releasing gc_mutex > (4) Free up that element > (5) Release gc_mutex > > A new round will not see element which was removed by previous round. Understood. > Paul E. McKenney wrote: > > > > Consider the following sequence of events: > > > > > > > > o CPU 0 picks up users_counter_idx int local variable idx. > > > > Let's assume that the value is zero. > > > > > > > > o CPU 0 is now preempted, interrupted, or otherwise delayed. > > > > > o CPU 1 takes gc_mutex. Your (1). > > > > > o CPU 1 starts garbage collection, finding some elements to > > > > delete, thus setting "element_deleted" to true. Your (2). > > > > o CPU 1 continues garbage collection, inverting the value of > > > > users_counter_idx, so that the value is now one, waiting > > > > for the value-zero readers, and freeing up the old elements. Your (3) and (4). > o CPU 1 releases gc_mutex. > > [1] Your (5). > > > > o CPU 0 continues execution, first atomically incrementing > > > > users_counter[0], then traversing the list, possibly sleeping. Now the trick here is that CPU 0 has the old value of users_counter_idx. So the reader and the garbage collector now disagree on which interval they are operating in. And CPU 0 might now be holding an element that will be deleted by the next round of GC. > o CPU 2 takes gc_mutex. Your (1) again. Presumably your single kernel thread migrated from CPU 1 to CPU 2, which could really happen. 
> > > > o CPU 2 starts a new round of garbage collection, again finding > > > > some elements to delete, and thus again setting > > > > "elements_deleted" to true. One of the elements deleted > > > > is the one that CPU 0 is currently referencing while asleep. Your (2) again. > > > No. CPU 2 can't start a new round of GC because GC function is exclusively > > > executed because of gc_mutex mutex. > > > > But CPU 1 would have released gc_mutex back at time [1], right? > > > Yes, CPU 1 will release gc_mutex after freeing up elements (which were removed > from the list after gc_mutex was taken). > > If CPU 0 sleeps between "idx = atomic_read(&users_counter_idx)" and > "atomic_inc(&users_counter[idx])", CPU 0 will not see the element > removed by CPU 1 because CPU 0 has not started list traversal. > Same result for CPU 0 sleeping between "atomic_inc(&users_counter[idx])" > and "list_for_each_rcu() {". No, CPU 0 really did start list traversal three bullets ago. The problem is that the reader and gc disagree on what interval they are in. > > > > o CPU 2 continues garbage collection, inverting the value of > > > > users_counter_idx, so that the value is now zero, waiting > > > > for the value-one readers, and freeing up the old elements. > > > > Note that CPU 0 is a value-zero reader, so that CPU 2 will > > > > not wait on it. > > > > > > > > CPU 2 therefore kfree()s the element that CPU 0 is currently > > > > referencing. Your (3) and (4) again. Note that the reader has incremented users_counter[0], while the GC is waiting only for users_counter[1]. So the GC is not going to wait for the reader. > > > CPU 2 won't continue GC, for CPU 2 can't start a new round of GC. > > > > I still don't see why CPU 0 would not have released gc_mutex back > > at point [1]. > > > CPU 1 has released gc_mutex at point [1]. > In that case, CPU 2 can take gc_mutex and start a new round. > Nobody can start a new round before previous round finishes. 
> > CPU 2 can start a new round, but by that time, CPU 0 finished list traversal > and atomically decremented users_counter[0] . CPU 1 won't finish a GC round > before CPU 0 decrements users_counter[0], and thus CPU 2 won't start > a new GC round before CPU 0 finishes list traversal. No, because CPU 2 is waiting on users_counter[1] to reach zero, but the reader has incremented users_counter[0]. GC will thus -not- wait on the reader. > > > > o CPU 0 wakes up, and suffers possibly fatal disappointment upon > > > > attempting to reference an element that has been freed -- and, > > > > worse yet, possibly re-allocated as some other type of > > > > structure. > > > > > > > CPU 0 won't suffer, for first round of GC (by CPU 1) prevents CPU 2 from > > > starting a new round of GC. > > > > Why would CPU 1 be unable to complete its round of GC, thus releasing > > gc_mutex, thus allowing CPU 2 from starting a new one? For that matter, > > CPU 1 could start a new round, correct? > > > Because CPU 1 waits for CPU 0's atomic_dec() without releasing gc_mutex . But CPU 0 did not do its atomic_inc() until after CPU 1 got done waiting, so CPU 1 cannot possibly wait on CPU 0. CPU 2 cannot possibly wait on CPU 0, because they are using different elements of the users_counters[] array. > > > > Or am I missing something in your pseudocode? > > > I think you missed that GC function is executed exclusively. > > > > > > The race between readers and GC is avoided as below. > > > > If you can tell me why CPU 1 cannot release gc_mutex, I will look at > > the following. Until then, I will stand by my scenario above. ;-) > > CPU 1 can release gc_mutex when that round finished (i.e. after freeing up > elements removed by that round). Agreed, but I don't understand how this helps. > > > > Also, if you have lots of concurrent readers, you can suffer high memory > > > > contention on the users_counter[] array, correct? > > > > > > Excuse me. I couldn't understand "memory contention"... 
> > > > > > ( http://www.answers.com/topic/memory-contention ) > > > | A situation in which two different programs, or two parts of a program, > > > | try to read items in the same block of memory at the same time. > > > Why suffered by atomic_read() at the same time? > > > Cache invalidation by atomic_inc()/atomic_dec() a shared variable? > > > > Yes, cache invalidation by atomic_inc()/atomic_dec() of a shared > > variable can definitly result in memory contention and extremely > > bad performance. > > I have poor knowledge about hardware mechanisms. > Would you please answer within you can? > > I experienced 8086 (not 80186 and later) assembly programming a bit > on MS-DOS. My experience says that (I don't know actual cycle numbers) > "mov eax, ebx" takes one CPU cycle. > "mov eax, [ebx]" takes three CPU cycles. > And it seems to me that modifying a shared variable doesn't affect > performance. > But if caching mechanisms exist, "mov eax, [ebx]" takes one CPU cycle if > [ebx] is on cache, three CPU cycles if not on cache, is this correct? > And modifying [ebx] invalidates cache, doesn't it? > Then, > > atomic_t counter[NR_CPUS]; > atomic_inc(&counter[smp_prosessor_id()]); > > shows better performance than > > atomic_t counter; > atomic_inc(&counter); > > ? > Does cache invalidation mechanism invalidate only sizeof(atomic_t) bytes > (or invalidates more bytes nearby &counter)? Modern CPUs are quite complex. There is a multi-cycle penalty for the instruction being atomic in the first place, and there can be many tens or even hundreds of cycles penalty if the variable to be manipulated resides in some other CPU's cache. These penalties were larger in older SMP hardware. Also, in general, the larger the system, the worse the penalties. Getting data on and off a chip is quite expensive. See slide 11 of: http://www.rdrop.com/users/paulmck/scalability/paper/TMevalSlides.2008.10.19a.pdf for measurements on a few-years-old system. 
Newer multi-core systems are about a factor of six faster, but only if you keep everything on a single die. If you go to multiple sockets, there is still improvement, but only a factor of two or so in terms of clock period. > Another keyword which is worrisome for me is NUMA. > My understanding is that NUMA splits RAM into nodes and tries to use RAM > in current node. > In NUMA environment, (for example) "mov eax, [ebx]" takes three CPU cycles > if ebx refers current node and hundred CPU cycles if ebx refers other node? > Then, is it preferable to place copy of ACL information to every node > rather than sharing one ACL information? Even without NUMA, a load that misses all caches and comes from DRAM costs many tens or even a few hundred cycles. NUMA increases the pain, normally by a small multiple. The exact numbers will depend on the hardware, of course. > > > > I recommend that you look into use of SRCU in this case. > > > > > > I have one worry regarding SRCU. > > > Inside synchronize_srcu(), there is a loop > > > > > > while (srcu_readers_active_idx(sp, idx)) > > > schedule_timeout_interruptible(1); > > > > > > but the reader's sleeping duration varies from less than one second to > > > more than hours. > > > > > > Checking for counters for every jiffies sounds too much waste of CPU. > > > Delaying kfree() for seconds or minutes won't cause troubles for TOMOYO. > > > It would be nice if checking interval is configurable like > > > "schedule_timeout_interruptible(sp->timeout);". > > > > This would not be a difficult change, and certainly would make sense in > > your case. I would be happy to review a patch from you, or to create a > > patch to SRCU if you would prefer. > > > > I would add a field as you say, and add an API element: > > > > void set_srcu_timeout(struct srcu_struct *sp, unsigned long timeout) > > > > The timeout would default to 1, and a value of 0 would be interpreted > > as 1. 
In your case, perhaps you would do the following after initializing > > the srcu_struct: > > > > set_srcu_timeout(&tomoyo_srcu, HZ); > > > > Would this work? > > Yes. May I? Of course!!! > (I use "long" rather than "unsigned long" because schedule_timeout() rejects > negative timeout value.) > > Regards. > -------------------- > Subject: [PATCH] SRCU: Allow longer timeout for non-urgent reclaimer. > > Currently synchronize_srcu() checks for readers for every jiffies. > But if reader sleeps for long, we don't need to check so frequently. > > This patch allows non-urgent SRCU reclaimers (e.g. checking for every second > is sufficient) to use longer timeout. Looks good to me! Of course, if it turns out that you don't actually need it, then not much benefit in including it, but: Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> > Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> > --- > include/linux/srcu.h | 2 ++ > kernel/srcu.c | 14 +++++++++++++- > 2 files changed, 15 insertions(+), 1 deletion(-) > > --- security-testing-2.6.orig/include/linux/srcu.h > +++ security-testing-2.6/include/linux/srcu.h > @@ -35,6 +35,7 @@ struct srcu_struct { > int completed; > struct srcu_struct_array *per_cpu_ref; > struct mutex mutex; > + long timeout; > }; > > #ifndef CONFIG_PREEMPT > @@ -49,5 +50,6 @@ int srcu_read_lock(struct srcu_struct *s > void srcu_read_unlock(struct srcu_struct *sp, int idx) __releases(sp); > void synchronize_srcu(struct srcu_struct *sp); > long srcu_batches_completed(struct srcu_struct *sp); > +void set_srcu_timeout(struct srcu_struct *sp, long timeout); > > #endif > --- security-testing-2.6.orig/kernel/srcu.c > +++ security-testing-2.6/kernel/srcu.c > @@ -44,6 +44,7 @@ > */ > int init_srcu_struct(struct srcu_struct *sp) > { > + sp->timeout = 1; > sp->completed = 0; > mutex_init(&sp->mutex); > sp->per_cpu_ref = alloc_percpu(struct srcu_struct_array); > @@ -201,7 +202,7 @@ void synchronize_srcu(struct srcu_struct > */ > > while 
(srcu_readers_active_idx(sp, idx)) > - schedule_timeout_interruptible(1); > + schedule_timeout_interruptible(sp->timeout); > > synchronize_sched(); /* Force memory barrier on all CPUs. */ > > @@ -249,6 +250,17 @@ long srcu_batches_completed(struct srcu_ > return sp->completed; > } > > +/** > + * set_srcu_timeout - set checking interval for synchronize_srcu() > + * @sp: srcu_struct > + * @timeout: checking interval in jiffies. > + */ > +void set_srcu_timeout(struct srcu_struct *sp, long timeout) > +{ > + if (timeout >= 1 && timeout != MAX_SCHEDULE_TIMEOUT) > + sp->timeout = timeout; > +} > + > EXPORT_SYMBOL_GPL(init_srcu_struct); > EXPORT_SYMBOL_GPL(cleanup_srcu_struct); > EXPORT_SYMBOL_GPL(srcu_read_lock); > > ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-20 1:28 ` Paul E. McKenney @ 2009-06-20 7:04 ` Tetsuo Handa 2009-06-21 4:07 ` Paul E. McKenney 0 siblings, 1 reply; 16+ messages in thread From: Tetsuo Handa @ 2009-06-20 7:04 UTC (permalink / raw) To: paulmck; +Cc: linux-security-module, linux-kernel Hello. Paul E. McKenney wrote: > On Fri, Jun 19, 2009 at 01:57:46PM +0900, Tetsuo Handa wrote: > > Hello. > > > > The GC thread is a loop of > > > > (1) Take gc_mutex > > (2) Remove an element from the list using RCU > > (3) Wait for readers without releasing gc_mutex > > (4) Free up that element > > (5) Release gc_mutex > > > > A new round will not see an element which was removed by a previous round. > > Understood. > > > Paul E. McKenney wrote: > > > > > Consider the following sequence of events: > > > > > > > > > > o CPU 0 picks up users_counter_idx into local variable idx. > > > > > Let's assume that the value is zero. > > > > > > > > > > o CPU 0 is now preempted, interrupted, or otherwise delayed. > > > > > > > o CPU 1 takes gc_mutex. > > Your (1). > > > > > > > > o CPU 1 starts garbage collection, finding some elements to > > > > > delete, thus setting "element_deleted" to true. > > Your (2). > > > > > > o CPU 1 continues garbage collection, inverting the value of > > > > > users_counter_idx, so that the value is now one, waiting > > > > > for the value-zero readers, and freeing up the old elements. > > Your (3) and (4). > > > o CPU 1 releases gc_mutex. > > > [1] > > Your (5). > > > > > > o CPU 0 continues execution, first atomically incrementing > > > > > users_counter[0], then traversing the list, possibly sleeping. > > Now the trick here is that CPU 0 has the old value of users_counter_idx. > So the reader and the garbage collector now disagree on which interval > they are operating in. > > And CPU 0 might now be holding an element that will be deleted by the > next round of GC. > > > o CPU 2 takes gc_mutex. > > Your (1) again. 
Presumably your single kernel thread migrated from > CPU 1 to CPU 2, which could really happen. > > > > > > o CPU 2 starts a new round of garbage collection, again finding > > > > > some elements to delete, and thus again setting > > > > > "elements_deleted" to true. One of the elements deleted > > > > > is the one that CPU 0 is currently referencing while asleep. > > Your (2) again. > > > > > No. CPU 2 can't start a new round of GC because GC function is exclusively > > > > executed because of gc_mutex mutex. > > > > > > But CPU 1 would have released gc_mutex back at time [1], right? > > > > > Yes, CPU 1 will release gc_mutex after freeing up elements (which were removed > > from the list after gc_mutex was taken). > > > > If CPU 0 sleeps between "idx = atomic_read(&users_counter_idx)" and > > "atomic_inc(&users_counter[idx])", CPU 0 will not see the element > > removed by CPU 1 because CPU 0 has not started list traversal. > > Same result for CPU 0 sleeping between "atomic_inc(&users_counter[idx])" > > and "list_for_each_rcu() {". > > No, CPU 0 really did start list traversal three bullets ago. The > problem is that the reader and gc disagree on what interval they are in. > > > > > > o CPU 2 continues garbage collection, inverting the value of > > > > > users_counter_idx, so that the value is now zero, waiting > > > > > for the value-one readers, and freeing up the old elements. > > > > > Note that CPU 0 is a value-zero reader, so that CPU 2 will > > > > > not wait on it. > > > > > > > > > > CPU 2 therefore kfree()s the element that CPU 0 is currently > > > > > referencing. > > Your (3) and (4) again. Note that the reader has incremented > users_counter[0], while the GC is waiting only for users_counter[1]. > So the GC is not going to wait for the reader. > > > > > CPU 2 won't continue GC, for CPU 2 can't start a new round of GC. > > > > > > I still don't see why CPU 0 would not have released gc_mutex back > > > at point [1]. 
> > > > > CPU 1 has released gc_mutex at point [1]. > > In that case, CPU 2 can take gc_mutex and start a new round. > > Nobody can start a new round before previous round finishes. > > > > CPU 2 can start a new round, but by that time, CPU 0 finished list traversal > > and atomically decremented users_counter[0] . CPU 1 won't finish a GC round > > before CPU 0 decrements users_counter[0], and thus CPU 2 won't start > > a new GC round before CPU 0 finishes list traversal. > > No, because CPU 2 is waiting on users_counter[1] to reach zero, but > the reader has incremented users_counter[0]. GC will thus -not- wait > on the reader. > Ah, I understood. You are right. CPU 2 has to wait for not only users_counter[1] but also users_counter[0]. > Modern CPUs are quite complex. There is a multi-cycle penalty for the > instruction being atomic in the first place, and there can be many tens > or even hundreds of cycles penalty if the variable to be manipulated > resides in some other CPU's cache. > I thought atomic_t is a handy and lightweight counter. But atomic_t may cause big penalty. I see. > These penalties were larger in older SMP hardware. Also, in general, > the larger the system, the worse the penalties. Getting data on and off > a chip is quite expensive. See slide 11 of: > > http://www.rdrop.com/users/paulmck/scalability/paper/TMevalSlides.2008.10.19a.pdf > > for measurements on a few-years-old system. Newer multi-core systems > are about a factor of six faster, but only if you keep everything on a > single die. If you go to multiple sockets, there is still improvement, > but only a factor of two or so in terms of clock period. > Wow, what a large difference. > > Another keyword which is worrisome for me is NUMA. > > My understanding is that NUMA splits RAM into nodes and tries to use RAM > > in current node. > > In NUMA environment, (for example) "mov eax, [ebx]" takes three CPU cycles > > if ebx refers current node and hundred CPU cycles if ebx refers other node? 
> > Then, is it preferable to place copy of ACL information to every node > > rather than sharing one ACL information? > > Even without NUMA, a load that misses all caches and comes from DRAM > costs many tens or even a few hundred cycles. NUMA increases the pain, > normally by a small multiple. The exact numbers will depend on the > hardware, of course. > I see. NUMA's pain is smaller than I thought. I don't need to worry about NUMA for the foreseeable future. > > Subject: [PATCH] SRCU: Allow longer timeout for non-urgent reclaimer. > > > > Currently synchronize_srcu() checks for readers for every jiffies. > > But if reader sleeps for long, we don't need to check so frequently. > > > > This patch allows non-urgent SRCU reclaimers (e.g. checking for every second > > is sufficient) to use longer timeout. > > Looks good to me! Of course, if it turns out that you don't actually > need it, then not much benefit in including it, but: > > Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> I see. Regarding my environment (VMware on Core2Duo PC), it seems no problem because the GC thread does not appear on /usr/bin/top . But if somebody noticed (maybe embedded/realtime/huge systems), let's apply this. Thank you for everything. ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-20 7:04 ` Tetsuo Handa @ 2009-06-21 4:07 ` Paul E. McKenney 0 siblings, 0 replies; 16+ messages in thread From: Paul E. McKenney @ 2009-06-21 4:07 UTC (permalink / raw) To: Tetsuo Handa; +Cc: linux-security-module, linux-kernel On Sat, Jun 20, 2009 at 04:04:43PM +0900, Tetsuo Handa wrote: > Hello. > > Paul E. McKenney wrote: > > On Fri, Jun 19, 2009 at 01:57:46PM +0900, Tetsuo Handa wrote: > > > Hello. > > > > > > The GC thread is a loop of > > > > > > (1) Take gc_mutex > > > (2) Remove an element from the list using RCU > > > (3) Wait for readers without releasing gc_mutex > > > (4) Free up that element > > > (5) Release gc_mutex > > > > > > A new round will not see an element which was removed by a previous round. > > > > Understood. > > > > > Paul E. McKenney wrote: > > > > > > Consider the following sequence of events: > > > > > > > > > > > > o CPU 0 picks up users_counter_idx into local variable idx. > > > > > > Let's assume that the value is zero. > > > > > > > > > > > > o CPU 0 is now preempted, interrupted, or otherwise delayed. > > > > > > > > > o CPU 1 takes gc_mutex. > > > > Your (1). > > > > > > > > > > > o CPU 1 starts garbage collection, finding some elements to > > > > > > delete, thus setting "element_deleted" to true. > > > > Your (2). > > > > > > > > o CPU 1 continues garbage collection, inverting the value of > > > > > > users_counter_idx, so that the value is now one, waiting > > > > > > for the value-zero readers, and freeing up the old elements. > > > > Your (3) and (4). > > > > > o CPU 1 releases gc_mutex. > > > > [1] > > > > Your (5). > > > > > > > > o CPU 0 continues execution, first atomically incrementing > > > > > > users_counter[0], then traversing the list, possibly sleeping. > > > > Now the trick here is that CPU 0 has the old value of users_counter_idx. > > So the reader and the garbage collector now disagree on which interval > > they are operating in. 
> > > > And CPU 0 might now be holding an element that will be deleted by the > > next round of GC. > > > > > o CPU 2 takes gc_mutex. > > > > Your (1) again. Presumably your single kernel thread migrated from > > CPU 1 to CPU 2, which could really happen. > > > > > > > > o CPU 2 starts a new round of garbage collection, again finding > > > > > > some elements to delete, and thus again setting > > > > > > "elements_deleted" to true. One of the elements deleted > > > > > > is the one that CPU 0 is currently referencing while asleep. > > > > Your (2) again. > > > > > > > No. CPU 2 can't start a new round of GC because GC function is exclusively > > > > > executed because of gc_mutex mutex. > > > > > > > > But CPU 1 would have released gc_mutex back at time [1], right? > > > > > > > Yes, CPU 1 will release gc_mutex after freeing up elements (which were removed > > > from the list after gc_mutex was taken). > > > > > > If CPU 0 sleeps between "idx = atomic_read(&users_counter_idx)" and > > > "atomic_inc(&users_counter[idx])", CPU 0 will not see the element > > > removed by CPU 1 because CPU 0 has not started list traversal. > > > Same result for CPU 0 sleeping between "atomic_inc(&users_counter[idx])" > > > and "list_for_each_rcu() {". > > > > No, CPU 0 really did start list traversal three bullets ago. The > > problem is that the reader and gc disagree on what interval they are in. > > > > > > > > o CPU 2 continues garbage collection, inverting the value of > > > > > > users_counter_idx, so that the value is now zero, waiting > > > > > > for the value-one readers, and freeing up the old elements. > > > > > > Note that CPU 0 is a value-zero reader, so that CPU 2 will > > > > > > not wait on it. > > > > > > > > > > > > CPU 2 therefore kfree()s the element that CPU 0 is currently > > > > > > referencing. > > > > Your (3) and (4) again. Note that the reader has incremented > > users_counter[0], while the GC is waiting only for users_counter[1]. 
> > So the GC is not going to wait for the reader. > > > > > > > CPU 2 won't continue GC, for CPU 2 can't start a new round of GC. > > > > > > > > I still don't see why CPU 0 would not have released gc_mutex back > > > > at point [1]. > > > > > > > CPU 1 has released gc_mutex at point [1]. > > > In that case, CPU 2 can take gc_mutex and start a new round. > > > Nobody can start a new round before previous round finishes. > > > > > > CPU 2 can start a new round, but by that time, CPU 0 finished list traversal > > > and atomically decremented users_counter[0] . CPU 1 won't finish a GC round > > > before CPU 0 decrements users_counter[0], and thus CPU 2 won't start > > > a new GC round before CPU 0 finishes list traversal. > > > > No, because CPU 2 is waiting on users_counter[1] to reach zero, but > > the reader has incremented users_counter[0]. GC will thus -not- wait > > on the reader. > > > Ah, I understood. You are right. > CPU 2 has to wait for not only users_counter[1] but also users_counter[0]. This sort of algorithm is indeed subtle and difficult to get right. Let's just say that I have made this same mistake in the past, as has everyone else that I know of who has tried something similar. ;-) > > Modern CPUs are quite complex. There is a multi-cycle penalty for the > > instruction being atomic in the first place, and there can be many tens > > or even hundreds of cycles penalty if the variable to be manipulated > > resides in some other CPU's cache. > > > I thought atomic_t is a handy and lightweight counter. But atomic_t may > cause big penalty. I see. The two exceptions are atomic_read() and atomic_set(), which, despite the names, do not involve atomic instructions. They are instead for type safety. > > These penalties were larger in older SMP hardware. Also, in general, > > the larger the system, the worse the penalties. Getting data on and off > > a chip is quite expensive. 
See slide 11 of: > > > > http://www.rdrop.com/users/paulmck/scalability/paper/TMevalSlides.2008.10.19a.pdf > > > > for measurements on a few-years-old system. Newer multi-core systems > > are about a factor of six faster, but only if you keep everything on a > > single die. If you go to multiple sockets, there is still improvement, > > but only a factor of two or so in terms of clock period. > > > Wow, what a large difference. Yeah, little problems with the finite speed of light, much less electrons. And the atomic nature of matter. > > > Another keyword which is worrisome for me is NUMA. > > > My understanding is that NUMA splits RAM into nodes and tries to use RAM > > > in current node. > > > In NUMA environment, (for example) "mov eax, [ebx]" takes three CPU cycles > > > if ebx refers current node and hundred CPU cycles if ebx refers other node? > > > Then, is it preferable to place copy of ACL information to every node > > > rather than sharing one ACL information? > > > > Even without NUMA, a load that misses all caches and comes from DRAM > > costs many tens or even a few hundred cycles. NUMA increases the pain, > > normally by a small multiple. The exact numbers will depend on the > > hardware, of course. > > > I see. NUMA's pain is smaller than I thought. > I don't need to worry about NUMA for the foreseeable future. Indeed, you usually only need to worry about NUMA after you have solved the SMP problems. > > > Subject: [PATCH] SRCU: Allow longer timeout for non-urgent reclaimer. > > > > > > Currently synchronize_srcu() checks for readers for every jiffies. > > > But if reader sleeps for long, we don't need to check so frequently. > > > > > > This patch allows non-urgent SRCU reclaimers (e.g. checking for every second > > > is sufficient) to use longer timeout. > > > > Looks good to me! Of course, if it turns out that you don't actually > > need it, then not much benefit in including it, but: > > > > Reviewed-by: Paul E. 
McKenney <paulmck@linux.vnet.ibm.com> > > I see. Regarding my environment (VMware on Core2Duo PC), it seems no problem > because the GC thread does not appear on /usr/bin/top . > But if somebody noticed (maybe embedded/realtime/huge systems), > let's apply this. Fair enough!!! Thanx, Paul ^ permalink raw reply [flat|nested] 16+ messages in thread
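Pulling the thread's conclusion together: with SRCU doing the grace-period bookkeeping (it waits on both counter indexes, avoiding the race analyzed above) plus the set_srcu_timeout() extension reviewed earlier, the TOMOYO reader/GC pair reduces to roughly the following. This is a sketch against the 2009-era SRCU API only; tomoyo_srcu, tomoyo_acl_list, the element type, and the is_deleted flag are illustrative names, not the actual TOMOYO code.

```c
static struct srcu_struct tomoyo_srcu;

/* Reader: may sleep while traversing; no home-grown counters needed. */
static void tomoyo_read_policy(void)
{
	struct tomoyo_acl_info *ptr;
	int idx = srcu_read_lock(&tomoyo_srcu);

	list_for_each_entry_rcu(ptr, &tomoyo_acl_list, list) {
		/* ... match against the request, possibly sleeping ... */
	}
	srcu_read_unlock(&tomoyo_srcu, idx);
}

/* Garbage collector: unlink under the update-side mutex, then wait for
 * every reader that might still see the element before freeing it --
 * the (1)..(5) loop from the discussion above. */
static void tomoyo_gc(void)
{
	struct tomoyo_acl_info *ptr, *tmp;

	mutex_lock(&tomoyo_policy_lock);
	list_for_each_entry_safe(ptr, tmp, &tomoyo_acl_list, list) {
		if (!ptr->is_deleted)
			continue;
		list_del_rcu(&ptr->list);
		synchronize_srcu(&tomoyo_srcu);	/* waits on both indexes */
		kfree(ptr);
	}
	mutex_unlock(&tomoyo_policy_lock);
}

static int __init tomoyo_gc_init(void)
{
	init_srcu_struct(&tomoyo_srcu);
	set_srcu_timeout(&tomoyo_srcu, HZ);	/* poll readers once a second */
	return 0;
}
```

Calling synchronize_srcu() once per freed element, as the GC loop above does, keeps the sketch close to Tetsuo's five-step description; batching the unlinks and issuing a single synchronize_srcu() per round would amortize the grace-period cost.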
* [PATCH] TOMOYO: Add garbage collector support. (v3) @ 2009-06-02 1:39 Tetsuo Handa 2009-06-02 1:57 ` Tetsuo Handa 0 siblings, 1 reply; 16+ messages in thread From: Tetsuo Handa @ 2009-06-02 1:39 UTC (permalink / raw) To: linux-security-module; +Cc: linux-kernel Hello. This patchset adds garbage collector support for TOMOYO. I replaced the cookie list approach with the refcounter approach. [PATCH 1/5] Move sleeping operations to outside the semaphore. [PATCH 2/5] Clarify lock protected section. [PATCH 3/5] Simplify policy reader. [PATCH 4/5] Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name(). [PATCH 5/5] Add refcounter and garbage collector. These patches are made for security-testing-2.6#next with a commit b1338d199dda6681d9af0297928af0a7eb9cba7b (tomoyo: add missing call to cap_bprm_set_creds) applied. Regards. ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH] TOMOYO: Add garbage collector support. (v3) 2009-06-02 1:39 Tetsuo Handa @ 2009-06-02 1:57 ` Tetsuo Handa 0 siblings, 0 replies; 16+ messages in thread From: Tetsuo Handa @ 2009-06-02 1:57 UTC (permalink / raw) To: linux-security-module; +Cc: linux-kernel I sent them manually, but the mailer didn't add "References:" and "In-Reply-To:" header. Please pick up from http://lkml.org/lkml/2009/6/1/479 http://lkml.org/lkml/2009/6/1/480 http://lkml.org/lkml/2009/6/1/481 http://lkml.org/lkml/2009/6/1/482 http://lkml.org/lkml/2009/6/1/486 Thanks. ^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2009-06-21 4:07 UTC | newest] Thread overview: 16+ messages (download: mbox.gz follow: Atom feed -- links below jump to the message on this page -- 2009-06-17 11:19 [PATCH] TOMOYO: Add garbage collector support. (v3) Tetsuo Handa 2009-06-17 11:21 ` [PATCH 1/3] TOMOYO: Move sleeping operations to outside the semaphore Tetsuo Handa 2009-06-17 11:22 ` [PATCH 2/3] TOMOYO: Replace tomoyo_save_name() with tomoyo_get_name()/tomoyo_put_name() Tetsuo Handa 2009-06-17 11:23 ` [PATCH 3/3] TOMOYO: Add RCU-like garbage collector Tetsuo Handa 2009-06-17 12:28 ` [PATCH] TOMOYO: Add garbage collector support. (v3) Peter Zijlstra 2009-06-17 16:31 ` Paul E. McKenney 2009-06-18 5:34 ` Tetsuo Handa 2009-06-18 6:45 ` [PATCH 3/3] TOMOYO: Add SRCU based garbage collector Tetsuo Handa 2009-06-18 16:05 ` Paul E. McKenney 2009-06-18 15:28 ` [PATCH] TOMOYO: Add garbage collector support. (v3) Paul E. McKenney 2009-06-19 4:57 ` Tetsuo Handa 2009-06-20 1:28 ` Paul E. McKenney 2009-06-20 7:04 ` Tetsuo Handa 2009-06-21 4:07 ` Paul E. McKenney -- strict thread matches above, loose matches on Subject: below -- 2009-06-02 1:39 Tetsuo Handa 2009-06-02 1:57 ` Tetsuo Handa