From: Eduard Zingerman <eddyz87@gmail.com>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Andrew Werner <awerner32@gmail.com>, bpf <bpf@vger.kernel.org>,
	Andrei Matei <andreimatei1@gmail.com>,
	Tamir Duberstein <tamird@gmail.com>,
	Joanne Koong <joannelkoong@gmail.com>,
	kernel-team@dataexmachina.dev, Song Liu <song@kernel.org>,
	Kumar Kartikeya Dwivedi <memxor@gmail.com>
Subject: Re: [BUG] verifier escape with iteration helpers (bpf_loop, ...)
Date: Tue, 19 Sep 2023 19:28:26 +0300	[thread overview]
Message-ID: <97a90da09404c65c8e810cf83c94ac703705dc0e.camel@gmail.com> (raw)
In-Reply-To: <a2995c1d7c01794ca9b652cdea7917cac5d98a16.camel@gmail.com>

It looks like I hit a related but slightly different bug with bpf_iter_num_next().
Consider the following example:

    SEC("fentry/" SYS_PREFIX "sys_nanosleep")
    int num_iter_bug(const void *ctx) {
        struct bpf_iter_num it;                 // fp[-8] below
        __u64 val = 0;                          // fp[-16] below
        __u64 *ptr = &val;                      // r7 below
        __u64 rnd = bpf_get_current_pid_tgid(); // r6 below
        void *v;
    
        bpf_iter_num_new(&it, 0, 10);
        while ((v = bpf_iter_num_next(&it))) {
            rnd++;
            if (rnd == 42) {
                ptr = (void*)(0xdead);
                continue;
            }
            bpf_probe_read_user(ptr, 8, (void*)(0xdeadbeef));
        }
        bpf_iter_num_destroy(&it);
        return 0;
    }

(Unfortunately, the example had to be converted to assembly to keep the
 compiler from clobbering the loop structure; the complete test case is
 at the end of the email.)
 
The example is not safe, because 0xdead is a possible value of `ptr` at
the bpf_probe_read_user() call. However, the verifier currently marks
the program as safe.

This happens because of states_equal() usage for iterator convergence
detection:

    static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
    {
        ...
        while (sl) {
            states_cnt++;
            if (sl->state.insn_idx != insn_idx)
                goto next;

            if (sl->state.branches) {
                ...
                if (is_iter_next_insn(env, insn_idx)) {
                    if (states_equal(env, &sl->state, cur)) {
                        ...
                        if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE)
                            goto hit;
                    }
                    goto skip_inf_loop_check;
                }
            }
        ...

With some additional logging I see that the following states are
considered equal:

    13: (85) call bpf_iter_num_next#59908
    ...
    at is_iter_next_insn(insn 13):
      old state:
         R0=scalar() R1_rw=fp-8 R6_r=scalar(id=1) R7=fp-16 R10=fp0
         fp-8_r=iter_num(ref_id=2,state=active,depth=0) fp-16=00000000 refs=2
      cur state:
         R0=rdonly_mem(id=3,ref_obj_id=2,off=0,imm=0) R1_w=fp-8 R6=42 R7_w=57005
         R10=fp0 fp-8=iter_num(ref_id=2,state=active,depth=1) fp-16=00000000 refs=2
    states_equal()?: true

Note that R7=fp-16 in the old state vs R7_w=57005 (0xdead) in the cur
state. The registers are considered equal because R7 does not have a
read mark. However, read marks are not yet finalized for the old state,
because sl->state.branches != 0. (Note: precision marks are not
finalized either, which should also be a problem, but demonstrating it
requires another example.)

A possible fix is to add a special flag to states_equal() and to skip
the liveness- and precision-based logic when this flag is set, setting
the flag in the is_iter_next_insn() branch above.

---

/* BTF FUNC records are not generated for kfuncs referenced
 * from inline assembly. These records are necessary for
 * libbpf to link the program. The function below is a hack
 * to ensure that BTF FUNC records are generated.
 */
void __kfunc_btf_root(void)
{
	bpf_iter_num_new(0, 0, 0);
	bpf_iter_num_next(0);
	bpf_iter_num_destroy(0);
}

SEC("fentry/" SYS_PREFIX "sys_nanosleep")
__naked int num_iter_bug(const void *ctx)
{
	asm volatile (
		// r7 = &fp[-16]
		// fp[-16] = 0
		"r7 = r10;"
		"r7 += -16;"
		"r0 = 0;"
		"*(u64 *)(r7 + 0) = r0;"
		// r6 = bpf_get_current_pid_tgid()
		"call %[bpf_get_current_pid_tgid];"
		"r6 = r0;"
		// bpf_iter_num_new(&fp[-8], 0, 10)
		"r1 = r10;"
		"r1 += -8;"
		"r2 = 0;"
		"r3 = 10;"
		"call %[bpf_iter_num_new];"
		// while (bpf_iter_num_next(&fp[-8])) {
		//   r6++
		//   if (r6 == 42) {
		//     r7 = 0xdead
		//     continue;
		//   }
		//   bpf_probe_read_user(r7, 8, 0xdeadbeef)
		// }
	"1:"
		"r1 = r10;"
		"r1 += -8;"
		"call %[bpf_iter_num_next];"
		"if r0 == 0 goto 2f;"
		"r6 += 1;"
		"if r6 != 42 goto 3f;"
		"r7 = 0xdead;"
		"goto 1b;"
	"3:"
		"r1 = r7;"
		"r2 = 8;"
		"r3 = 0xdeadbeef;"
		"call %[bpf_probe_read_user];"
		"goto 1b;"
	"2:"
		// bpf_iter_num_destroy(&fp[-8])
		"r1 = r10;"
		"r1 += -8;"
		"call %[bpf_iter_num_destroy];"
		// return 0
		"r0 = 0;"
		"exit;"
		:
		: __imm(bpf_get_current_pid_tgid),
		  __imm(bpf_iter_num_new),
		  __imm(bpf_iter_num_next),
		  __imm(bpf_iter_num_destroy),
		  __imm(bpf_probe_read_user)
		: __clobber_all
	);
}
