From mboxrd@z Thu Jan  1 00:00:00 1970
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML
Subject: [patch V2 29/29] x86/stacktrace: Use common infrastructure
Date: Thu, 18 Apr 2019 10:41:48 +0200
Message-Id: <20190418084255.740246383@linutronix.de>
References: <20190418084119.056416939@linutronix.de>
User-Agent: quilt/0.65
MIME-Version: 1.0
Cc: Mike Snitzer, David Airlie, Catalin Marinas, Joonas Lahtinen,
 dri-devel@lists.freedesktop.org, linux-mm@kvack.org, dm-devel@redhat.com,
 Alexander Potapenko, Christoph Lameter, Christoph Hellwig, Alasdair Kergon,
 linux-arch@vger.kernel.org, x86@kernel.org, kasan-dev@googlegroups.com,
 David Rientjes, Andrey Ryabinin, Alexey Dobriyan,
 intel-gfx@lists.freedesktop.org, Johannes Thumshirn, Maarten Lankhorst,
 Akinobu Mita, Steven Rostedt, Josef Bacik, Rodrigo Vivi, Mike Rapoport,
 Jani Nikula, Andy Lutomirski, Josh Poimboeuf, David Sterba, Dmitry Vyukov,
 Chris Mason, Pekka Enberg, iommu@lists.linux-foundation.org, Daniel Vetter,
 Andrew Morton, Robin Murphy, linux-btrfs@vger.kernel.org
List-Id: Development issues for Linux IOMMU support
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Replace the stack_trace_save*() functions with the new arch_stack_walk()
interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
---
 arch/x86/Kconfig             |    1 
 arch/x86/kernel/stacktrace.c |  116 +++++++------------------------------------
 2 files changed, 20 insertions(+), 97 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -74,6 +74,7 @@ config X86
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
+	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -12,75 +12,31 @@
 #include <asm/stacktrace.h>
 #include <asm/unwind.h>
 
-static int save_stack_address(struct stack_trace *trace, unsigned long addr,
-			      bool nosched)
-{
-	if (nosched && in_sched_functions(addr))
-		return 0;
-
-	if (trace->skip > 0) {
-		trace->skip--;
-		return 0;
-	}
-
-	if (trace->nr_entries >= trace->max_entries)
-		return -1;
-
-	trace->entries[trace->nr_entries++] = addr;
-	return 0;
-}
-
-static void noinline __save_stack_trace(struct stack_trace *trace,
-			       struct task_struct *task, struct pt_regs *regs,
-			       bool nosched)
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs)
 {
 	struct unwind_state state;
 	unsigned long addr;
 
-	if (regs)
-		save_stack_address(trace, regs->ip, nosched);
+	if (regs && !consume_entry(cookie, regs->ip, false))
+		return;
 
 	for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
 	     unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
-		if (!addr || save_stack_address(trace, addr, nosched))
+		if (!addr || !consume_entry(cookie, addr, false))
 			break;
 	}
 }
 
 /*
- * Save stack-backtrace addresses into a stack_trace buffer.
+ * This function returns an error if it detects any unreliable features of the
+ * stack.  Otherwise it guarantees that the stack trace is reliable.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-void save_stack_trace(struct stack_trace *trace)
-{
-	trace->skip++;
-	__save_stack_trace(trace, current, NULL, false);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace);
-
-void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
-{
-	__save_stack_trace(trace, current, regs, false);
-}
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-{
-	if (!try_get_task_stack(tsk))
-		return;
-
-	if (tsk == current)
-		trace->skip++;
-	__save_stack_trace(trace, tsk, NULL, true);
-
-	put_task_stack(tsk);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-
-#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
-
-static int __always_inline
-__save_stack_trace_reliable(struct stack_trace *trace,
-			    struct task_struct *task)
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+			     void *cookie, struct task_struct *task)
 {
 	struct unwind_state state;
 	struct pt_regs *regs;
@@ -117,7 +73,7 @@ static int __always_inline
 		if (!addr)
 			return -EINVAL;
 
-		if (save_stack_address(trace, addr, false))
+		if (!consume_entry(cookie, addr, false))
 			return -EINVAL;
 	}
 
@@ -132,32 +88,6 @@ static int __always_inline
 	return 0;
 }
 
-/*
- * This function returns an error if it detects any unreliable features of the
- * stack.  Otherwise it guarantees that the stack trace is reliable.
- *
- * If the task is not 'current', the caller *must* ensure the task is inactive.
- */
-int save_stack_trace_tsk_reliable(struct task_struct *tsk,
-				  struct stack_trace *trace)
-{
-	int ret;
-
-	/*
-	 * If the task doesn't have a stack (e.g., a zombie), the stack is
-	 * "reliably" empty.
-	 */
-	if (!try_get_task_stack(tsk))
-		return 0;
-
-	ret = __save_stack_trace_reliable(trace, tsk);
-
-	put_task_stack(tsk);
-
-	return ret;
-}
-#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */
-
 /* Userspace stacktrace - based on kernel/trace/trace_sysprof.c */
 
 struct stack_frame_user {
@@ -182,15 +112,15 @@ copy_stack_frame(const void __user *fp,
 	return ret;
 }
 
-static inline void __save_stack_trace_user(struct stack_trace *trace)
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs)
 {
-	const struct pt_regs *regs = task_pt_regs(current);
 	const void __user *fp = (const void __user *)regs->bp;
 
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = regs->ip;
+	if (!consume_entry(cookie, regs->ip, false))
+		return;
 
-	while (trace->nr_entries < trace->max_entries) {
+	while (1) {
 		struct stack_frame_user frame;
 
 		frame.next_fp = NULL;
@@ -200,8 +130,8 @@ static inline void __save_stack_trace_us
 		if ((unsigned long)fp < regs->sp)
 			break;
 		if (frame.ret_addr) {
-			trace->entries[trace->nr_entries++] =
-				frame.ret_addr;
+			if (!consume_entry(cookie, frame.ret_addr, false))
+				return;
 		}
 		if (fp == frame.next_fp)
 			break;
@@ -209,11 +139,3 @@ static inline void __save_stack_trace_us
 	}
 }
 
-void save_stack_trace_user(struct stack_trace *trace)
-{
-	/*
-	 * Trace user stack if we are not a kernel thread
-	 */
-	if (current->mm)
-		__save_stack_trace_user(trace);
-}