CEPH filesystem development
* (unknown)
@ 2015-03-09 16:48 Joshua Schmid
  0 siblings, 0 replies; 71+ messages in thread
From: Joshua Schmid @ 2015-03-09 16:48 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel
-- 
Kind regards,
Joshua Schmid
Trainee - Storage SAP HANA
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nürnberg
--------------------------------------------------------------------------------------------------------------------
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard,
Jennifer Guild, Dilip Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
--------------------------------------------------------------------------------------------------------------------
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2020-03-27  9:20 chenanqing
  0 siblings, 0 replies; 71+ messages in thread
From: chenanqing @ 2020-03-27  9:20 UTC (permalink / raw)
  To: chenanqing, linux-kernel, linux-scsi, open-iscsi, ceph-devel,
	martin.petersen, jejb, cleech, lduncan

From: Chen Anqing <chenanqing@oppo.com>
To: Lee Duncan <lduncan@suse.com>
Cc: Chris Leech <cleech@redhat.com>,
        "James E . J . Bottomley" <jejb@linux.ibm.com>,
        "Martin K . Petersen" <martin.petersen@oracle.com>,
        ceph-devel@vger.kernel.org,
        open-iscsi@googlegroups.com,
        linux-scsi@vger.kernel.org,
        linux-kernel@vger.kernel.org,
        chenanqing@oppo.com
Subject: [PATCH] scsi: libiscsi: we should take compound page into account also
Date: Fri, 27 Mar 2020 05:20:01 -0400
Message-Id: <20200327092001.56879-1-chenanqing@oppo.com>
X-Mailer: git-send-email 2.18.2

This patch addresses a real crash in which the slab object came from a
compound page, so the compound page must be taken into account as well.

Fixes: 08b11eaccfcf ("scsi: libiscsi: fall back to sendmsg for slab pages")

Signed-off-by: Chen Anqing <chenanqing@oppo.com>
---
 drivers/scsi/libiscsi_tcp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index 6ef93c7af954..98304e5e1f6f 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -128,7 +128,8 @@ static void iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
         * coalescing neighboring slab objects into a single frag which
         * triggers one of hardened usercopy checks.
         */
-       if (!recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg)))
+       if (!recv && page_count(sg_page(sg)) >= 1 &&
+           !PageSlab(compound_head(sg_page(sg))))
                return;

        if (recv) {
--
2.18.2

________________________________
OPPO

This e-mail and its attachments contain confidential information from OPPO, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!

^ permalink raw reply related	[flat|nested] 71+ messages in thread
* (unknown)
@ 2020-03-27  8:36 chenanqing
  0 siblings, 0 replies; 71+ messages in thread
From: chenanqing @ 2020-03-27  8:36 UTC (permalink / raw)
  To: chenanqing, linux-kernel, netdev, ceph-devel, kuba, sage, jlayton,
	idryomov

From: Chen Anqing <chenanqing@oppo.com>
To: Ilya Dryomov <idryomov@gmail.com>
Cc: Jeff Layton <jlayton@kernel.org>,
        Sage Weil <sage@redhat.com>,
        Jakub Kicinski <kuba@kernel.org>,
        ceph-devel@vger.kernel.org,
        netdev@vger.kernel.org,
        linux-kernel@vger.kernel.org,
        chenanqing@oppo.com
Subject: [PATCH] libceph: we should take compound page into account also
Date: Fri, 27 Mar 2020 04:36:30 -0400
Message-Id: <20200327083630.36296-1-chenanqing@oppo.com>
X-Mailer: git-send-email 2.18.2

This patch addresses a real crash in which the slab object came from a
compound page, so the compound page must be taken into account as well.

Fixes: 7e241f647dc7 ("libceph: fall back to sendmsg for slab pages")

Signed-off-by: Chen Anqing <chenanqing@oppo.com>
---
 net/ceph/messenger.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index f8ca5edc5f2c..e08c1c334cd9 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -582,7 +582,7 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
         * coalescing neighboring slab objects into a single frag which
         * triggers one of hardened usercopy checks.
         */
-       if (page_count(page) >= 1 && !PageSlab(page))
+       if (page_count(page) >= 1 && !PageSlab(compound_head(page)))
                sendpage = sock->ops->sendpage;
        else
                sendpage = sock_no_sendpage;
--
2.18.2


^ permalink raw reply related	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2018-02-02 12:15 Robert Vasek
  0 siblings, 0 replies; 71+ messages in thread
From: Robert Vasek @ 2018-02-02 12:15 UTC (permalink / raw)
  To: ceph-devel@vger.kernel.org



^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-12-24  2:58 柯弼舜
  0 siblings, 0 replies; 71+ messages in thread
From: 柯弼舜 @ 2017-12-24  2:58 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-12-23 15:32 柯弼舜
  0 siblings, 0 replies; 71+ messages in thread
From: 柯弼舜 @ 2017-12-23 15:32 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-11-20  2:36 Robert Wang
  0 siblings, 0 replies; 71+ messages in thread
From: Robert Wang @ 2017-11-20  2:36 UTC (permalink / raw)
  To: ceph-devel



^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-08-23  7:23 Xuehan Xu
  0 siblings, 0 replies; 71+ messages in thread
From: Xuehan Xu @ 2017-08-23  7:23 UTC (permalink / raw)
  To: ceph-devel

Hi, everyone.

Recently, we did a test as follows:

We enabled cache tiering and added a cache pool "vms_back_cache" on top
of the base pool "vms_back". We first created an object, then created a
snap in the base pool and wrote to that object again, which promoted the
object into the cache pool. At this point, we used "ceph-objectstore-tool"
to dump the object, and the result is as follows:

{
    "id": {
        "oid": "test.obj.6",
        "key": "",
        "snapid": -2,
        "hash": 750422257,
        "max": 0,
        "pool": 11,
        "namespace": "",
        "max": 0
    },
    "info": {
        "oid": {
            "oid": "test.obj.6",
            "key": "",
            "snapid": -2,
            "hash": 750422257,
            "max": 0,
            "pool": 11,
            "namespace": ""
        },
        "version": "5010'5",
        "prior_version": "4991'3",
        "last_reqid": "client.175338.0:1",
        "user_version": 5,
        "size": 4194303,
        "mtime": "2017-08-23 15:09:03.459892",
        "local_mtime": "2017-08-23 15:09:03.461111",
        "lost": 0,
        "flags": 4,
        "snaps": [],
        "truncate_seq": 0,
        "truncate_size": 0,
        "data_digest": 4294967295,
        "omap_digest": 4294967295,
        "watchers": {}
    },
    "stat": {
        "size": 4194303,
        "blksize": 4096,
        "blocks": 8200,
        "nlink": 1
    },
    "SnapSet": {
        "snap_context": {
            "seq": 13,
            "snaps": [
                13
            ]
        },
        "head_exists": 1,
        "clones": [
            {
                "snap": 13,
                "size": 4194303,
                "overlap": "[0~100,115~4194188]"
            }
        ]
    }
}

Then we did cache-flush and cache-evict to flush that object down to
the base pool, and, again, used "ceph-objectstore-tool" to dump the
object in the base pool:

{
    "id": {
        "oid": "test.obj.6",
        "key": "",
        "snapid": -2,
        "hash": 750422257,
        "max": 0,
        "pool": 10,
        "namespace": "",
        "max": 0
    },
    "info": {
        "oid": {
            "oid": "test.obj.6",
            "key": "",
            "snapid": -2,
            "hash": 750422257,
            "max": 0,
            "pool": 10,
            "namespace": ""
        },
        "version": "5015'4",
        "prior_version": "4991'2",
        "last_reqid": "osd.34.5013:1",
        "user_version": 5,
        "size": 4194303,
        "mtime": "2017-08-23 15:09:03.459892",
        "local_mtime": "2017-08-23 15:10:48.122138",
        "lost": 0,
        "flags": 52,
        "snaps": [],
        "truncate_seq": 0,
        "truncate_size": 0,
        "data_digest": 163942140,
        "omap_digest": 4294967295,
        "watchers": {}
    },
    "stat": {
        "size": 4194303,
        "blksize": 4096,
        "blocks": 8200,
        "nlink": 1
    },
    "SnapSet": {
        "snap_context": {
            "seq": 13,
            "snaps": [
                13
            ]
        },
        "head_exists": 1,
        "clones": [
            {
                "snap": 13,
                "size": 4194303,
                "overlap": "[]"
            }
        ]
    }
}

As is shown, the "overlap" field is empty.
In the osd log, we found the following records:

2017-08-23 12:46:36.083014 7f675c704700 20 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]  got
attrs
2017-08-23 12:46:36.083021 7f675c704700 15
filestore(/home/xuxuehan/github-xxh-fork/ceph/src/dev/osd0) read
3.3_head/#3:dd4db749:test-rados-api-xxh02v.ops.corp.qihoo.net-10886-3::foo:head#
0~8
2017-08-23 12:46:36.083398 7f675c704700 10
filestore(/home/xuxuehan/github-xxh-fork/ceph/src/dev/osd0)
FileStore::read
3.3_head/#3:dd4db749:test-rados-api-xxh02v.ops.corp.qihoo.net-10886-3::foo:head#
0~8/8
2017-08-23 12:46:36.083414 7f675c704700 20 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]  got
data
2017-08-23 12:46:36.083444 7f675c704700 20 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
cursor.is_complete=0 0 attrs 8 bytes 0 omap header bytes 0 omap data
bytes in 0 keys 0 reqids
2017-08-23 12:46:36.083457 7f675c704700 10 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
dropping ondisk_read_lock
2017-08-23 12:46:36.083467 7f675c704700 15 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
do_osd_op_effects osd.0 con 0x7f67874f0d00
2017-08-23 12:46:36.083478 7f675c704700 15 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
log_op_stats osd_op(osd.0.6:2 3.92edb2bb
test-rados-api-xxh02v.ops.corp

It seems that, when doing "copy-get", no extended attributes are
copied. We believe that it's the following code that led to this
result:

int ReplicatedPG::getattrs_maybe_cache(ObjectContextRef obc,
        map<string, bufferlist> *out,
        bool user_only) {
    int r = 0;
    if (pool.info.require_rollback()) {
        if (out)
            *out = obc->attr_cache;
    } else {
        r = pgbackend->objects_get_attrs(obc->obs.oi.soid, out);
    }
    if (out && user_only) {
        map<string, bufferlist> tmp;
        for (map<string, bufferlist>::iterator i = out->begin();
                i != out->end(); ++i) {
            if (i->first.size() > 1 && i->first[0] == '_')
                tmp[i->first.substr(1, i->first.size())].claim(i->second);
        }
        tmp.swap(*out);
    }
    return r;
}

It seems that when "user_only" is true, extended attributes whose names
do not start with '_' are filtered out. Is it supposed to work this way?
We also found that there are only two places in the source code that
invoke ReplicatedPG::getattrs_maybe_cache, and in both of them
"user_only" is true. Why add this parameter?

By the way, we also found that this code was added in commit
78d9c0072bfde30917aea4820a811d7fc9f10522, but we don't understand its
purpose.

Thank you:-)

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2017-07-27  2:16 ceph-devel
  0 siblings, 0 replies; 71+ messages in thread
From: ceph-devel @ 2017-07-27  2:16 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2017-07-27  2:14 ceph-devel
  0 siblings, 0 replies; 71+ messages in thread
From: ceph-devel @ 2017-07-27  2:14 UTC (permalink / raw)
  To: ceph-devel

list

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-04-04 19:31 Kristi Nikolla
  0 siblings, 0 replies; 71+ messages in thread
From: Kristi Nikolla @ 2017-04-04 19:31 UTC (permalink / raw)
  To: ceph-devel

subscribe

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-03-28 21:11 George Papadrosou
  0 siblings, 0 replies; 71+ messages in thread
From: George Papadrosou @ 2017-03-28 21:11 UTC (permalink / raw)
  To: Ceph Development

subscribe+digest

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-03-23 17:07 Ning Yao
  0 siblings, 0 replies; 71+ messages in thread
From: Ning Yao @ 2017-03-23 17:07 UTC (permalink / raw)
  To: ceph-devel

Hi all,

Why is run-rbd-tests no longer included in "make check" to verify the
rbd CLI? How can we test the rbd CLI now?


Regards
Ning Yao

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-03-02 15:55 nikita.gerasimov
  0 siblings, 0 replies; 71+ messages in thread
From: nikita.gerasimov @ 2017-03-02 15:55 UTC (permalink / raw)
  To: ceph-devel

run-make-check.sh SUCCESS: http://paste.ubuntu.com/24096418/

WBR, Nikita.

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2017-02-23 14:31 tao chang
  0 siblings, 0 replies; 71+ messages in thread
From: tao chang @ 2017-02-23 14:31 UTC (permalink / raw)
  To: ceph-devel, ceph-users

Hi,

I have a ceph cluster (ceph 10.2.5) with 3 nodes, each with two OSDs.

There was a power outage last night, and all the servers were restarted
this morning. All OSDs came back up except osd.0.

ID WEIGHT  TYPE NAME        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04500 root volumes
-2 0.01500     host zk25-02
 0 0.01500         osd.0       down        0          1.00000
 1 0.01500         osd.1         up  1.00000          1.00000
-3 0.01500     host zk25-03
 2 0.01500         osd.2         up  1.00000          1.00000
 3 0.01500         osd.3         up  1.00000          1.00000
-4 0.01500     host zk25-01
 4 0.01500         osd.4         up  1.00000          1.00000
 5 0.01500         osd.5         up  1.00000          1.00000

I tried to run it again under gdb, and it produced the following backtrace:

(gdb) bt
#0  0x00007ffff4cfd5f7 in raise () from /lib64/libc.so.6
#1  0x00007ffff4cfece8 in abort () from /lib64/libc.so.6
#2  0x00007ffff56019d5 in __gnu_cxx::__verbose_terminate_handler() ()
from /lib64/libstdc++.so.6
#3  0x00007ffff55ff946 in ?? () from /lib64/libstdc++.so.6
#4  0x00007ffff55ff973 in std::terminate() () from /lib64/libstdc++.so.6
#5  0x00007ffff55ffb93 in __cxa_throw () from /lib64/libstdc++.so.6
#6  0x0000555555b93b7f in pg_pool_t::decode (this=<optimized out>,
bl=...) at osd/osd_types.cc:1569
#7  0x0000555555f3a53f in decode (p=..., c=...) at osd/osd_types.h:1487
#8  decode<long, pg_pool_t> (m=Python Exception <type
'exceptions.IndexError'> list index out of range:
std::map with 1 elements, p=...) at include/encoding.h:648
#9  0x0000555555f2fa8d in OSDMap::decode_classic
(this=this@entry=0x55555fdf6480, p=...) at osd/OSDMap.cc:2026
#10 0x0000555555f2fe8c in OSDMap::decode
(this=this@entry=0x55555fdf6480, bl=...) at osd/OSDMap.cc:2116
#11 0x0000555555f3116e in OSDMap::decode (this=0x55555fdf6480, bl=...)
at osd/OSDMap.cc:1985
#12 0x00005555558e51fc in OSDService::try_get_map
(this=0x55555ff51860, epoch=76) at osd/OSD.cc:1340
#13 0x0000555555947ece in OSDService::get_map (this=<optimized out>,
e=<optimized out>, this=<optimized out>) at osd/OSD.h:884
#14 0x00005555558fb0f2 in OSD::init (this=0x55555ff50000) at osd/OSD.h:1917
#15 0x000055555585eea5 in main (argc=<optimized out>, argv=<optimized
out>) at ceph_osd.cc:605

The crash was caused by a failed decode of the OSDMap structure from the
osdmap file (/var/lib/ceph/osd/ceph-0/current/meta/osdmap.76__0_64173F9C__none).
By comparing it with the same file on osd.1, we confirmed that the osdmap
file has been corrupted.

Does anyone know how to fix it? Thanks in advance!

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2016-10-11  3:08 李 剑宇
  0 siblings, 0 replies; 71+ messages in thread
From: 李 剑宇 @ 2016-10-11  3:08 UTC (permalink / raw)
  To: ceph-devel@vger.kernel.org

subscribe ceph-devel


^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2016-08-30  3:08 耿航
  0 siblings, 0 replies; 71+ messages in thread
From: 耿航 @ 2016-08-30  3:08 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2016-06-06  9:27 changtao
  0 siblings, 0 replies; 71+ messages in thread
From: changtao @ 2016-06-06  9:27 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel 



^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2016-05-11  8:58 杨振兴
  0 siblings, 0 replies; 71+ messages in thread
From: 杨振兴 @ 2016-05-11  8:58 UTC (permalink / raw)
  To: ceph-devel@vger.kernel.org

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2016-05-11  2:42 Jiankun Yu
  0 siblings, 0 replies; 71+ messages in thread
From: Jiankun Yu @ 2016-05-11  2:42 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
[parent not found: <[PATCH 0/2] ceph osd: initial VMware VAAI support>]
* (unknown), 
@ 2016-03-08  1:44 王少辉
  0 siblings, 0 replies; 71+ messages in thread
From: 王少辉 @ 2016-03-08  1:44 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2016-02-29  3:50 鼎张
  0 siblings, 0 replies; 71+ messages in thread
From: 鼎张 @ 2016-02-29  3:50 UTC (permalink / raw)
  To: ceph-devel

 sorry, please ignore this test email

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-11-23  2:15 Dong Wu
  0 siblings, 0 replies; 71+ messages in thread
From: Dong Wu @ 2015-11-23  2:15 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-11-17  7:34 1990 Self
  0 siblings, 0 replies; 71+ messages in thread
From: 1990 Self @ 2015-11-17  7:34 UTC (permalink / raw)
  To: ceph-devel

Hi all,

While testing RGW, we found that when putting a file larger than 1 GB
into a bucket and monitoring network traffic, the traffic fluctuates
greatly and sometimes drops to nothing. We hope you can help.

Thanks,

yapeng

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-11-13 17:00 Guang Yang
  0 siblings, 0 replies; 71+ messages in thread
From: Guang Yang @ 2015-11-13 17:00 UTC (permalink / raw)
  To: ceph-devel@vger.kernel.org; +Cc: joao

Hi Joao,
We have a problem when trying to add new monitors to an unhealthy
cluster, on which I would like to ask for your suggestion.

After adding the new monitor, it started syncing the store and went
into an infinite loop:

2015-11-12 21:02:23.499510 7f1e8030e700 10
mon.mon04c011@2(synchronizing) e5 handle_sync_chunk mon_sync(chunk
cookie 4513071120 lc 14697737 bl 929616 bytes last_key
osdmap,full_22530) v2
2015-11-12 21:02:23.712944 7f1e8030e700 10
mon.mon04c011@2(synchronizing) e5 handle_sync_chunk mon_sync(chunk
cookie 4513071120 lc 14697737 bl 799897 bytes last_key
osdmap,full_3259) v2


We talked earlier this morning on IRC, and at the time I thought it
was because the osdmap epoch was increasing, which led to this
infinite loop.

I then set the nobackfill/norecovery flags and the osdmap epoch
froze; however, the problem is still there.

While the osdmap epoch is 22531, the switch always happens at
osdmap.full_22530 (as shown by the log above).

Looking at the code on both sides, it looks like this check
(https://github.com/ceph/ceph/blob/master/src/mon/Monitor.cc#L1389)
is always true, and I can confirm from the log that (sp.last_commited <
paxos->get_version()) was false, so presumably sp.synchronizer always
has a next chunk?

Does this look familiar to you? Or is there any other troubleshooting
I can try? Thanks very much.

Thanks,
Guang

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-10-20  8:10 maillist_linux
  0 siblings, 0 replies; 71+ messages in thread
From: maillist_linux @ 2015-10-20  8:10 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel


^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-10-06 21:25 Aakanksha Pudipeddi-SSI
  0 siblings, 0 replies; 71+ messages in thread
From: Aakanksha Pudipeddi-SSI @ 2015-10-06 21:25 UTC (permalink / raw)
  To: ceph-devel@vger.kernel.org

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-09-22 12:57 Redynk, Lukasz
  0 siblings, 0 replies; 71+ messages in thread
From: Redynk, Lukasz @ 2015-09-22 12:57 UTC (permalink / raw)
  To: ceph-devel@vger.kernel.org

subscribe ceph-devel
--------------------------------------------------------------------

Intel Technology Poland sp. z o.o.
ul. Slowackiego 173 | 80-298 Gdansk | Sad Rejonowy Gdansk Polnoc | VII Wydzial Gospodarczy Krajowego Rejestru Sadowego - KRS 101882 | NIP 957-07-52-316 | Kapital zakladowy 200.000 PLN.

This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). If you are not the intended recipient, please contact the sender and delete all copies; any review or distribution by
others is strictly prohibited.

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-05-16 10:42 Haomai Wang
  0 siblings, 0 replies; 71+ messages in thread
From: Haomai Wang @ 2015-05-16 10:42 UTC (permalink / raw)
  To: huang jun; +Cc: Steve Capper, ceph-devel, Yazen Ghannam

Even if the data comes from /dev/zero, the data crc shouldn't be 0.

I guess the osd (on ARM) isn't computing the crc. But from the code, crc
on ARM should be fine.

On Sat, May 16, 2015 at 6:21 PM, huang jun <hjwsm1989@gmail.com> wrote:
> It always happens; every test has such errors. And our cluster and
> client running on x86 work fine; we have never seen a bad crc error.
>
>
> 2015-05-16 17:30 GMT+08:00 Haomai Wang <haomaiwang@gmail.com>:
>> is this always happen or occasionally?
>>
>> On Sat, May 16, 2015 at 10:10 AM, huang jun <hjwsm1989@gmail.com> wrote:
>>> hi,steve
>>>
>>> 2015-05-15 16:36 GMT+08:00 Steve Capper <steve.capper@linaro.org>:
>>>> On 15 May 2015 at 00:51, huang jun <hjwsm1989@gmail.com> wrote:
>>>>> hi,all
>>>>
>>>> Hi HuangJun,
>>>>
>>>>>
>>>>> We run a ceph cluster on an ARM platform (arm64, Linux kernel 3.14, OS
>>>>> Ubuntu 14.10), and use "dd if=/dev/zero of=/mnt/test bs=4M count=125"
>>>>> to write data.  On the osd side, we got a bad data CRC error.
>>>>>
>>>>> The kclient log: (tid=6)
>>>>> May 14 17:21:08 node103 kernel: [  180.194312] CPU[0] libceph:
>>>>> send_request ffffffc8d252f000 tid-6 to osd0 flags 36 pg 1.9aae829f req
>>>>> data size is 4194304
>>>>> May 14 17:21:08 node103 kernel: [  180.194316] CPU[0] libceph: tid-6
>>>>> ----- ffffffc0702f66c8 to osd0 42=osd_op len 197+0+4194304 -----
>>>>> libceph: tid-6 front_crc is 388648745 middle_crc is 0 data_crc is 3036014994
>>>>>
>>>>> The OSD-0 log:
>>>>> 2015-05-13 08:12:50.049345 7f378d8d8700  0 seq  3 tid 6 front_len 197
>>>>> mid_len 0 data_len 4194304
>>>>> 2015-05-13 08:12:50.049348 7f378d8d8700  0 crc in front 388648745 exp 388648745
>>>>> 2015-05-13 08:12:50.049395 7f378d8d8700  0 crc in middle 0 exp 0
>>>>> 2015-05-13 08:12:50.049964 7f378d8d8700  0 crc in data 0 exp 3036014994
>>>>> 2015-05-13 08:12:50.050234 7f378d8d8700  0 bad crc in data 0 != exp 3036014994
>>>>>
>>>>> some considerations:
>>>>> 1) we use the ceph 0.80.7 release version and compiled it on ARM; does
>>>>> this work? Or does ceph's code have an ARM branch?
>>>>
>>>> We did run a Ceph version close to that for 64-bit ARM, I'm checking
>>>> out 0.80.7 now to test.
>>>> In v9.0.0, there is some code to use the ARM optional crc32c
>>>> instructions, but this isn't in 0.80.7.
>>>>
>>>>>
>>>>> 2) as we have written 125 objects, only a few of them report a CRC error,
>>>>> and the correct objects' data_crc is 0 on both the osd and the kclient. The
>>>>> wrong object's data_crc is not 0 on the kclient, but the osd calculates
>>>>> 0. The object data came from /dev/zero, so I think the data_crc should be
>>>>> 0; am I right?
>>>>>
>>>>
>>>> If the initial CRC seed value is non-zero, then the CRC of a buffer
>>>> full of zeros won't be zero.
>>>> So ceph_crc32c(somethingnonzero, zerofilledbuffer, len) will be non-zero.
>>>>
>>>> I would like to reproduce this problem here.
>>>> What steps did you take before this error occurred?
>>>> Is this a cephfs filesystem or something on top of an RBD image?
>>>> Which kernel are you running? Is it the one that comes with Ubuntu?
>>>> (If so which package version is it?)
>>>>
>>> We use Linux kernel version 3.14, tested just on Ubuntu, with
>>> ceph version v0.80.7. Both cephfs and RBD images have CRC problems.
>>> I'm not sure whether it's related to memory, since we tested many
>>> times but only a few runs reported a CRC error.
>>> As I mentioned, I suspect a memory fault changed the data, because we
>>> wrote 125 objects, and all the data_crc values are 0 except the bad CRC
>>> object's data_crc. Any tips are welcome.
>>>
>>>> Cheers,
>>>> --
>>>> Steve
>>>
>>>
>>>
>>> --
>>> thanks
>>> huangjun
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>
>
> --
> thanks
> huangjun



-- 
Best Regards,

Wheat
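
Steve's point above — that ceph_crc32c over an all-zero buffer is only zero when
the seed is zero — can be checked with a small stand-alone sketch. This is a
plain bit-by-bit reflected CRC-32C with the seed fed in raw (matching the
convention the logs above suggest); it is an illustration, not Ceph's optimized
implementation:

```python
def crc32c(seed: int, data: bytes) -> int:
    """Bit-by-bit reflected CRC-32C (Castagnoli); seed is used raw, no xor-out."""
    POLY = 0x82F63B78  # reversed Castagnoli polynomial
    crc = seed & 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (POLY if crc & 1 else 0)
    return crc

zeros = bytes(4 * 1024)  # stand-in for a /dev/zero-filled buffer

print(crc32c(0, zeros))           # 0: a zero seed over zeros stays zero
print(crc32c(0xDEADBEEF, zeros) != 0)  # True: a nonzero seed propagates through

# standard CRC-32C check value, adjusted for the raw-seed convention:
assert crc32c(0xFFFFFFFF, b"123456789") ^ 0xFFFFFFFF == 0xE3069283
```

This matches the logs: the good objects (zero data, zero seed) show data_crc 0,
while any nonzero seed or nonzero byte makes the CRC nonzero.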

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2015-03-11  2:43 Andrew Shewmaker
  0 siblings, 0 replies; 71+ messages in thread
From: Andrew Shewmaker @ 2015-03-11  2:43 UTC (permalink / raw)
  To: ceph-devel; +Cc: agshew, marioskogias, chendi.xue

The following patches are based on the work of Marios Kogias, first
posted in August. http://www.spinics.net/lists/ceph-devel/msg19890.html
This patch is against HEAD as of March 10th,
commit 5d5b510810e96503b9323b010149f7bd5b45db7c.
It can also be found at https://github.com/agshew/ceph/tree/wip-blkin-v6

Thanks to Josh Durgin again for comments on the V5 patchset.

I think the blkin patchset is looking pretty good at this point.


Outstanding issues:

1) librados will need more general blkin tracing;
   currently it only has aio_read_traced() and aio_write_traced() calls

2) some work will need to be done on filtering blkin events/keyvalues


To Do:

 1. push a wip-blkin branch to github.com/ceph and take advantage of gitbuilder test/qa
 2. submit a pull request
 3. add Andreas' tracepoints https://github.com/ceph/ceph/pull/2877 using Blkin
    and investigate how easy it is to select the level of tracing detail


Changes since V5:

  * put MOSDOp encode(features) statement back that was accidentally left out in V5
  * moved OSD daemonize call back to original spot
  * initialized blkin in ceph-mds (and moved all initializations to first patch)
  * updated aio_read_traced() and aio_write_traced() to match non-traced versions
  * improved blkin wrapper readability by removing unnecessary stringification

Changes since V4:

  * removed messenger_end trace event
    In Pipe::reader(), when the message is enqueued, it will be destroyed.
    Naive pointer checks don't work here; you can't depend on
    pointers being set to null on destruction. It may be possible to wrap
    the trace event with m->get() and m->put() to keep it around, or put this
    trace event in dispatch paths, but the trace event is just removed for now
    in order to move forward.
  * removed mutex in aio_*_traced() methods
    A mutex was carried forward from Marios' original patch while rebasing
    when it should have been removed.
  * removed Message::trace_end_after_span
    Message::trace_end_after_span never appeared to be true, so
    it has been removed
  * added asserts for master trace and endpoint checks
    Tried to use asserts in more places, but they prevented execution.
    Tried to use douts and ldouts instead, but they didn't work.
  * added parens around macro pointer args
    parens make it safer to use pointers passed as arguments in a macro

Changes since V2:

  * WITH_BLKIN added to makefile vars when necessary
  * added Blkin build instructions
  * added Zipkin build instructions
  * Blkin wrapper macros do not stringify args any longer.
    The macro wrappers will be more flexible/robust if they don't
    turn arguments into strings.
  * added missing blkin_trace_info struct prototype to librados.h
  * TrackedOp trace creation methods are virtual, implemented in OpRequest
  * avoid faults due to non-existent traces
    Check if osd_trace exists when creating a pg_trace, etc.
    Return true only if trace creation was successful.
    Use dout() if trace_osd, trace_pg, etc. fail, in order to ease debugging.
  * create trace_osd in ms_fast_dispatch

Changes since V1:
  * split build changes into separate patch
  * conditional build support for blkin (default off)
  * blkin is not a Ceph repo submodule
    build and install packages from https://github.com/agshew/blkin.git
    Note: rpms don't support babeltrace plugins for use with Zipkin
  * removal of debugging in Message::init_trace_info()

With this patchset Ceph can use Blkin, a library created by
Marios Kogias and others, which enables tracking a specific request
from the time it enters the system at higher levels till it is finally
served by RADOS.

In general, Blkin implements the tracing semantics described in the Dapper
paper http://static.googleusercontent.com/media/research.google.com/el/pubs/archive/36356.pdf
in order to trace the causal relationships between the different
processing phases that an IO request may trigger. The goal is an end-to-end
visualisation of the request's route in the system, accompanied by information
concerning latencies in each processing phase. Thanks to LTTng this can happen
with a minimal overhead and in realtime. In order to visualize the results Blkin
was integrated with Twitter's Zipkin http://twitter.github.io/zipkin/
(which is a tracing system entirely based on Dapper).

A short document describing how to test Blkin tracing in Ceph with Zipkin
is in doc/dev/trace.rst
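
The Dapper-style semantics described above — one trace id per request, child
spans that record their causal parent, and key/value annotations per phase —
can be sketched as a toy (Python, hypothetical names; the real Blkin API is
C/C++ and ships events over LTTng):

```python
import itertools
import time

_ids = itertools.count(1)

class Span:
    """One processing phase of a traced request (Dapper-style)."""
    def __init__(self, name, trace_id=None, parent_id=None):
        self.name = name
        # the trace id identifies the whole request, end to end
        self.trace_id = trace_id if trace_id is not None else next(_ids)
        self.span_id = next(_ids)
        self.parent_id = parent_id        # causal link to the creating span
        self.events = []                  # (timestamp, key, value) annotations

    def keyval(self, key, value):
        self.events.append((time.time(), key, value))

    def child(self, name):
        # a child span keeps the request-wide trace id and records its parent
        return Span(name, self.trace_id, self.span_id)

# one IO request traced across three processing phases:
root = Span("librados write")
osd = root.child("osd op")
pg = osd.child("pg apply")
pg.keyval("object", "rbd_data.0001")

assert root.trace_id == osd.trace_id == pg.trace_id  # same request end to end
assert pg.parent_id == osd.span_id                   # causality preserved
```

A visualizer like Zipkin reconstructs the request's route and per-phase
latencies from exactly these (trace_id, span_id, parent_id, timestamp) tuples.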

^ permalink raw reply	[flat|nested] 71+ messages in thread
[parent not found: <491603614.2711416.1420447432492.JavaMail.yahoo@jws100153.mail.ne1.yahoo.com>]
* (unknown)
@ 2014-12-13  1:16 wanglin
  0 siblings, 0 replies; 71+ messages in thread
From: wanglin @ 2014-12-13  1:16 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel


^ permalink raw reply	[flat|nested] 71+ messages in thread
[parent not found: <1570038211.167595.1414613146892.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>]
* (unknown), 
@ 2014-10-27  0:06 Logan Vig
  0 siblings, 0 replies; 71+ messages in thread
From: Logan Vig @ 2014-10-27  0:06 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2014-10-04 17:45 Andrew Gaul
  0 siblings, 0 replies; 71+ messages in thread
From: Andrew Gaul @ 2014-10-04 17:45 UTC (permalink / raw)
  To: ceph-devel



^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2014-06-13 10:18 Mrs Teresa AU
  0 siblings, 0 replies; 71+ messages in thread
From: Mrs Teresa AU @ 2014-06-13 10:18 UTC (permalink / raw)


Although you might be nervous about my e-mail, as we have not met
before, my name is Mrs Teresa Au and I work with HSBC Hong Kong; there is
a sum of USD$23,200,000.00 in a business proposal I want to share with you.
It is absolutely risk free; if you are interested, send me a reply to
my private e-mail below: mrs_tere@126.com

Best Regards,
Email: mrs_tere@126.com
Mrs Teresa Au.






----------------------------------------------------------------
Provincia di Treviso - http://www.provincia.treviso.it


^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2014-05-19  5:35 Songjiang Zhao
  0 siblings, 0 replies; 71+ messages in thread
From: Songjiang Zhao @ 2014-05-19  5:35 UTC (permalink / raw)
  To: ceph-devel


subscribe ceph-devel
-- 
___
Songjiang Zhao
songjiangzhao@gmail.com

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2014-04-16 14:58 Ilya Storozhilov
  0 siblings, 0 replies; 71+ messages in thread
From: Ilya Storozhilov @ 2014-04-16 14:58 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2014-01-15 12:00 Elite Homes
  0 siblings, 0 replies; 71+ messages in thread
From: Elite Homes @ 2014-01-15 12:00 UTC (permalink / raw)
  To: Recipients

Apply today for an affordable loan at 3% interest rate, kindly reply if interested.

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2014-01-12  3:13 Songjiang Zhao
  0 siblings, 0 replies; 71+ messages in thread
From: Songjiang Zhao @ 2014-01-12  3:13 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2013-10-22 12:05 COMPANY
  0 siblings, 0 replies; 71+ messages in thread
From: COMPANY @ 2013-10-22 12:05 UTC (permalink / raw)


[-- Attachment #1: UK NATIONAL.pdf --]
[-- Type: application/pdf, Size: 2642 bytes --]

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2013-06-10  9:53 Ta Ba Tuan
  0 siblings, 0 replies; 71+ messages in thread
From: Ta Ba Tuan @ 2013-06-10  9:53 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

Hi Everyone

I am TuanTB (full name: Tuan Ta Ba).
I'm working on cloud computing.
We are using Ceph, and I'm a new Ceph member,
so I hope to join the ceph-devel mailing list.

Thank you so much
--TuanTB


^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2013-05-30  1:38 Ta Ba Tuan
  0 siblings, 0 replies; 71+ messages in thread
From: Ta Ba Tuan @ 2013-05-30  1:38 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
[parent not found: <19392198-aa95-4c84-ac5c-ef496bb3cedc@mail-1.enovance.com>]
* (unknown), 
@ 2012-10-23  4:12 jie sun
  0 siblings, 0 replies; 71+ messages in thread
From: jie sun @ 2012-10-23  4:12 UTC (permalink / raw)
  To: ceph-devel

Hi,

I created and mounted an rbd for a virtual machine. It can be used
as a block device normally, but messages like the following often appear:
"Oct 23 10:30:22 ubuntu12 kernel: [321506.941606] libceph: osd3
10.100.211.146:6810 socket closed
Oct 23 10:30:59 ubuntu12 kernel: [321544.337856] libceph: osd9
10.100.211.68:6809 socket closed
Oct 23 10:45:22 ubuntu12 kernel: [322407.233090] libceph: osd3
10.100.211.146:6810 socket closed
Oct 23 10:45:59 ubuntu12 kernel: [322444.766796] libceph: osd9
10.100.211.68:6809 socket closed
Oct 23 11:00:22 ubuntu12 kernel: [323307.529098] libceph: osd3
10.100.211.146:6810 socket closed
Oct 23 11:01:00 ubuntu12 kernel: [323345.241679] libceph: osd9
10.100.211.68:6809 socket closed
Oct 23 11:15:22 ubuntu12 kernel: [324207.821113] libceph: osd3
10.100.211.146:6810 socket closed
Oct 23 11:16:00 ubuntu12 kernel: [324245.717747] libceph: osd9
10.100.211.68:6809 socket closed
Oct 23 11:17:01 ubuntu12 CRON[10529]: (root) CMD (   cd / && run-parts
--report /etc/cron.hourly)
Oct 23 11:30:23 ubuntu12 kernel: [325108.117134] libceph: osd3
10.100.211.146:6810 socket closed"

These logs can also be found in "/var/log/syslog".
I googled this problem, but didn't exactly understand what
you wrote in "http://tracker.newdream.net/issues/2260".
How can I resolve this problem? My ceph version is 0.48.
Should I change some files, or modify the contents of some file?

Thank you !

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-09-26  6:50 Fabian.Eichstaedt
  0 siblings, 0 replies; 71+ messages in thread
From: Fabian.Eichstaedt @ 2012-09-26  6:50 UTC (permalink / raw)
  To: ceph-devel

 subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-07-24 18:47 Mitch Anderson
  0 siblings, 0 replies; 71+ messages in thread
From: Mitch Anderson @ 2012-07-24 18:47 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-07-09  1:00 HarmeekSingh Bedi
  0 siblings, 0 replies; 71+ messages in thread
From: HarmeekSingh Bedi @ 2012-07-09  1:00 UTC (permalink / raw)
  To: ceph-devel, majordomo

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-05-25 11:18 robothroli company
  0 siblings, 0 replies; 71+ messages in thread
From: robothroli company @ 2012-05-25 11:18 UTC (permalink / raw)





 I am robothroli, purchase manager from roli Merchant Ltd. We are an
import/export company based in Taiwan. We are interested in purchasing
your product and I would like to make an inquiry. Please inform me of:

Sample availability and price
Minimum order quantity
FOB Prices

Sincerely
Purchase Manager
robothroli


^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-05-25  5:30 Nam Dang
  0 siblings, 0 replies; 71+ messages in thread
From: Nam Dang @ 2012-05-25  5:30 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-05-08  0:54 Tim Flavin
  0 siblings, 0 replies; 71+ messages in thread
From: Tim Flavin @ 2012-05-08  0:54 UTC (permalink / raw)
  To: ceph-devel

The new site is great!  I like the Ceph documentation; however, I found
a couple of typos.  Is this the best place to address them?  (Some of the
apparent typos may be my not understanding what is going on.)



http://ceph.com/docs/master/config-cluster/ceph-conf/

The  "Hardware Recommendations" link near the bottom of the page gives
a 404.  Did you want to point to
http://ceph.com/docs/master/install/hardware-recommendations/ ?


http://ceph.com/docs/master/config-ref/osd-config

For "osd client message size cap", the default value is 500 MB but
the description lists it as 200 MB.


http://ceph.com/docs/master/api/librbdpy/

The line of code "size = 4 * 1024 * 1024  # 4 GiB" appears to be
missing a * 1024, and the next line
 is "rbd_inst.create('myimage', 4)" when it probably should be
"rbd_inst.create('myimage', size)". This is repeated several times.
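
For reference, the corrected arithmetic is one more factor of 1024 (a sketch
only; rbd_inst and the create() call come from the docs snippet quoted above
and are left commented out here):

```python
# The corrected size computation from the docs snippet.
GiB = 1024 ** 3
size = 4 * GiB                 # 4 GiB = 4294967296 bytes, not 4 MiB
print(size)                    # 4294967296

# the docs example would then pass the computed value through, roughly:
# rbd_inst.create('myimage', size)   # instead of the literal 4
```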

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2012-03-19  1:39 佐々木 喜徳
  0 siblings, 0 replies; 71+ messages in thread
From: 佐々木 喜徳 @ 2012-03-19  1:39 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2012-02-04  4:46 Masuko Tomoya
  0 siblings, 0 replies; 71+ messages in thread
From: Masuko Tomoya @ 2012-02-04  4:46 UTC (permalink / raw)
  To: ceph-devel

Hi, all.

I'm trying to attach an rbd volume to an instance on KVM,
but I have a problem.
Could you help me?

---
I tried to attach an rbd volume on ceph01 to an instance on compute1 with
the virsh command.

root@compute1:~# virsh attach-device test-ub16 /root/testvolume.xml
error: Failed to attach device from /root/testvolume.xml
error: cannot resolve symlink rbd/testvolume: No such file or directory

/var/log/messages
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: error :
qemuMonitorTextAddDevice:2417 : operation failed: adding
virtio-blk-pci,bus=pci.0,addr=0x9,drive=drive-virtio-disk4,id=virtio-disk4
device failed: Device needs media, but drive is empty#015#012Device
'virtio-blk-pci' could not be initialized#015#012
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: warning :
qemuDomainAttachPciDiskDevice:188 : qemuMonitorAddDevice failed on
file=rbd:rbd/testvolume,if=none,id=drive-virtio-disk4,format=raw
(virtio-blk-pci,bus=pci.0,addr=0x9,drive=drive-virtio-disk4,id=virtio-disk4)
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: error :
virSecurityDACRestoreSecurityFileLabel:143 : cannot resolve symlink
rbd/testvolume: No such file or directory
Feb  3 20:14:48 compute1 libvirtd: 20:14:48.717: 3234: warning :
qemuDomainAttachPciDiskDevice:229 : Unable to restore security label
on rbd/testvolume

There is no log in /var/log/ceph/mon.0.log on host ceph01.
---


My environment is below.
*There are two servers. All servers are Ubuntu 10.10 x86_64.
*ceph01: a single server configured with ceph. (version: 0.41-1maverick)
*compute1: kvm hypervisor
 -librados2 and librbd1 packages are installed.
 (version: 0.41-1maverick)
 -qemu-kvm is 0.14.0-rc1. I built qemu with rbd enabled.
  the output of 'qemu-img' shows 'rbd' in the supported formats field.
  (I built qemu referring to this page:
  http://ceph.newdream.net/wiki/QEMU-RBD)
 -apparmor is disabled.
 -libvirt is 0.8.8

====
 -there is ceph.conf on compute1.
root@compute1:~# ls -l /etc/ceph/
total 20
-rw-r--r-- 1 root root 508 2012-02-03 14:38 ceph.conf
-rw------- 1 root root  63 2012-02-03 17:04 keyring.admin
-rw------- 1 root root  63 2012-02-03 14:38 keyring.bin
-rw------- 1 root root  56 2012-02-03 14:38 keyring.mds.0
-rw------- 1 root root  56 2012-02-03 14:38 keyring.osd.0

=====
 -contents of ceph.conf is below.
root@compute1:~# cat /etc/ceph/ceph.conf
[global]
        auth supported = cephx
        keyring = /etc/ceph/keyring.bin
[mon]
        mon data = /data/data/mon$id
        debug ms = 1
[mon.0]
        host = ceph01
        mon addr = 10.68.119.191:6789
[mds]
        keyring = /etc/ceph/keyring.$name
[mds.0]
        host = ceph01
[osd]
        keyring = /etc/ceph/keyring.$name
        osd data = /data/osd$id
        osd journal = /data/osd$id/journal
        osd journal size = 512
        osd class tmp = /var/lib/ceph/tmp
        debug osd = 20
        debug ms = 1
        debug filestore = 20
[osd.0]
        host = ceph01
        btrfs devs = /dev/sdb1

===
*content of keyring.admin is below
root@compute1:~# cat /etc/ceph/keyring.admin
[client.admin]
        key = AQDFeCxPyBlNIRAAxS1DcRHpMXRpcjY/GNMwYg==


===
*output of running 'ceph auth list'
root@ceph01:/etc/ceph# ceph auth list
2012-02-03 20:34:59.507451 mon <- [auth,list]
2012-02-03 20:34:59.508785 mon.0 -> 'installed auth entries:
mon.
        key: AQDFeCxPiK04IxAAslDBNkrOGKWxcbCh2iysqg==
mds.0
        key: AQDFeCxPsJ+LGhAAJ3/rmkAtGXSv/eHh0yXgww==
        caps: [mds] allow
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.0
        key: AQDFeCxPoEK+ExAAecD7+tWgpIRoZx2AT7Jwbg==
        caps: [mon] allow rwx
        caps: [osd] allow *
client.admin
        key: AQDFeCxPyBlNIRAAxS1DcRHpMXRpcjY/GNMwYg==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
' (0)

====
*xml file is below.
root@compute1:~# cat /root/testvolume.xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/testvolume'>
    <host name='10.68.119.191' port='6789'/>
  </source>
  <target dev='vde' bus='virtio'/>
</disk>

====
*testvolume is on rados pools.
root@compute1:~# qemu-img info rbd:rbd/testvolume
image: rbd:rbd/testvolume
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: unavailable


Waiting for reply,

Tomoya.

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2011-11-16 19:41 Wido den Hollander
  0 siblings, 0 replies; 71+ messages in thread
From: Wido den Hollander @ 2011-11-16 19:41 UTC (permalink / raw)
  To: ceph-devel


When upgrading or installing, we do not want to stop or start the init scripts.

This could break upgrades and also do harmful stuff we don't want.

Let the sysadmin decide when to (re)start the daemons.

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2011-10-18  3:08 Kota Hada
  0 siblings, 0 replies; 71+ messages in thread
From: Kota Hada @ 2011-10-18  3:08 UTC (permalink / raw)
  To: ceph-devel

subscribe ceph-devel

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2011-07-21 17:22 Western Union®
  0 siblings, 0 replies; 71+ messages in thread
From: Western Union® @ 2011-07-21 17:22 UTC (permalink / raw)



You have a transfer of £1,000,000.00 from Western Union®. For more information,
contact this office. Email: western.unit33@w.cn



^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2011-05-21 12:54 western101@algish.com
  0 siblings, 0 replies; 71+ messages in thread
From: western101@algish.com @ 2011-05-21 12:54 UTC (permalink / raw)



My associate has helped me to send your first payment
of $7,500 USD to you as instructed by Mr. David Cameron
the United Kingdom prime minister after the last G20
meeting that was held in United Kingdom, making you one
of the beneficiaries. Here is the information below.

MTCN Numbers: 6096147516
Sender First Name Is = Johannes
Second Name = Davis

I told him to keep sending you $7,500 USD twice a week
until the FULL payment of ($820000.00 United State Dollars)
is completed.

A certificate will be made to change the Receiver Name as
stated by the British prime minister, send your Full Names
and address via Email to: Mr Garry Moore

You cannot pickup the money until the certificate is issued to you.

Regards
Mr. Garry Moore.






^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2011-05-03 16:05 ken leo
  0 siblings, 0 replies; 71+ messages in thread
From: ken leo @ 2011-05-03 16:05 UTC (permalink / raw)


Good day

We offer loans to individual companies at a cheap
3% rate, personal and investment; we offer loans from
$5,000.00 to $100,000,000.00 dollars. Get back to us with the exact amount
you need.
Please fill out and return this form to proceed.

Full name: _
Loan purpose: _
Age: _
Gender: _
Address: _
Country: _
Company name: _
Position: _
Phone: _
Amount needed as a loan: _
Duration: _

If you are interested, contact us below
E-mail: Brucefastfunds09@gmail.com

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2011-03-22  0:48 Sage Weil
  0 siblings, 0 replies; 71+ messages in thread
From: Sage Weil @ 2011-03-22  0:48 UTC (permalink / raw)
  To: linux-fsdevel, viro; +Cc: linux-kernel, ceph-devel

From: Sage Weil <sage@newdream.net>
Date: Mon, 21 Mar 2011 15:51:04

The Ceph client is told by the server when it has the entire contents of 
a directory in cache, and is notified prior to any changes.  However, 
the current VFS interfaces simply do not allow the fs to take advantage 
of the known-valid cached content in a non-racy way.  To do so, the fs 
needs some notification prior to dentries being dropped out of the 
dcache (e.g. due to memory pressure).  Instead, Ceph is currently forced 
to talk to the server, which is quite frustrating (and slow).

The first patch adds a new d_prune dentry_operation that is called 
before the VFS throws dentries out of cache (specifically, before the 
victim dentry is unhashed).  The next two patches make the necessary 
changes in the Ceph fs code to safely clear a D_COMPLETE flag in the 
directory dentry's d_fsdata when a child is pruned.  The third patch 
specifically compensates for calls to dentry_unhash() in vfs_rmdir() and 
vfs_rename_dir().  The last patch adjusts the Ceph fs code to take 
advantage of the new flag.  That change is pretty simple because most of 
the infrastructure is already in place (we were previously relying on 
d_release for racy notification of pruning).

Adding this interface would more or less codify the idea that the VFS 
shouldn't unhash random dentries without first calling d_prune.  There 
are currently two places where the VFS currently unhashes: vfs_rmdir and 
vfs_rename_dir both call dentry_unhash(), which is there to make it easy 
for simple file systems to avoid races with directory removal and 
lookups.  That could arguably be pushed down into those file systems, 
but it's a more delicate cleanup.

Is the d_prune d_op a reasonable VFS interface extension?  Is it 
acceptable in its current form?

Thanks!
sage


See also
  git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git d_prune


Sage Weil (4):
  vfs: add d_prune dentry operation
  ceph: clear parent D_COMPLETE flag when on dentry prune
  ceph: compensate for dentry_unhash() calls in vfs_rmdir() and
    vfs_rename_dir()
  ceph: use new D_COMPLETE dentry flag

 Documentation/filesystems/Locking |    1 +
 fs/ceph/caps.c                    |    8 +--
 fs/ceph/dir.c                     |  110 ++++++++++++++++++++++++++++++++-----
 fs/ceph/inode.c                   |    9 +--
 fs/ceph/mds_client.c              |    6 +-
 fs/ceph/super.h                   |   23 +++++++-
 fs/dcache.c                       |    8 +++
 include/linux/dcache.h            |    3 +
 8 files changed, 139 insertions(+), 29 deletions(-)
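
The interaction described above — notify the filesystem before a victim dentry
is unhashed, so the parent's D_COMPLETE flag can be cleared in time — can be
sketched with a toy cache (Python, hypothetical names; the real interface is a
C callback in struct dentry_operations):

```python
class Dir:
    """Toy directory inode: child entries plus a 'listing complete' flag."""
    def __init__(self):
        self.entries = {}
        self.complete = True   # the server told us we cache the full listing

def ceph_d_prune(parent, name):
    # Called BEFORE the cache drops a child: once any child is gone,
    # the cached listing is incomplete and readdir must go back to the server.
    parent.complete = False

class DCache:
    """Toy dcache that calls a d_prune-style hook before evicting."""
    def __init__(self, on_prune):
        self.on_prune = on_prune

    def prune(self, parent, name):
        self.on_prune(parent, name)   # notify the fs first, then evict
        del parent.entries[name]

d = Dir()
d.entries["a"] = object()
d.entries["b"] = object()
cache = DCache(ceph_d_prune)
cache.prune(d, "a")                # e.g. memory pressure evicts one entry
assert d.complete is False         # a later readdir will consult the server
assert "b" in d.entries
```

Without the pre-eviction hook, the flag could only be cleared after the fact
(e.g. in d_release), which is exactly the race the cover letter describes.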



^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown), 
@ 2011-03-01  9:48 Saurav Lahiri
  0 siblings, 0 replies; 71+ messages in thread
From: Saurav Lahiri @ 2011-03-01  9:48 UTC (permalink / raw)
  To: ceph-devel

subscribe

^ permalink raw reply	[flat|nested] 71+ messages in thread
* (unknown)
@ 2010-09-24 17:10 Important Notice
  0 siblings, 0 replies; 71+ messages in thread
From: Important Notice @ 2010-09-24 17:10 UTC (permalink / raw)



You have won 1,000,000.00 GBP. Send Name, Address, Tel

^ permalink raw reply	[flat|nested] 71+ messages in thread

end of thread, other threads:[~2020-07-22  4:56 UTC | newest]

Thread overview: 71+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-09 16:48 (unknown) Joshua Schmid
  -- strict thread matches above, loose matches on Subject: below --
2020-07-22  4:45 (unknown) Darlehen Bedienung
2020-06-27 21:58 (unknown) lookman joe
2020-03-27  9:20 (unknown) chenanqing
2020-03-27  8:36 (unknown) chenanqing
2020-03-05 10:47 (unknown) Juanito S. Galang
2018-02-02 12:15 (unknown), Robert Vasek
2018-01-23 13:36 (unknown), Mr Sheng Li Hung
2017-12-24  2:58 (unknown), 柯弼舜
2017-12-23 15:32 (unknown), 柯弼舜
2017-11-20  2:36 (unknown), Robert Wang
2017-08-23  7:23 (unknown), Xuehan Xu
2017-08-11  4:59 (unknown), Administrator
2017-08-02  4:12 (unknown), Administrator
2017-07-27  2:16 (unknown) ceph-devel
2017-07-27  2:14 (unknown) ceph-devel
2017-06-28  3:22 (unknown), Administrator
2017-06-23  6:09 (unknown), Administrator
2017-04-04 19:31 (unknown), Kristi Nikolla
2017-03-28 21:11 (unknown), George Papadrosou
2017-03-23 17:07 (unknown), Ning Yao
2017-03-02 15:55 (unknown), nikita.gerasimov
2017-02-23 14:31 (unknown), tao chang
2017-02-16 20:59 (unknown), Qin's Yanjun
2016-10-11  3:08 (unknown), 李 剑宇
2016-08-30  3:08 (unknown), 耿航
2016-06-06  9:27 (unknown), changtao
2016-05-11  8:58 (unknown), 杨振兴
2016-05-11  2:42 (unknown), Jiankun Yu
     [not found] <[PATCH 0/2] ceph osd: initial VMware VAAI support>
2016-03-10  6:34 ` (unknown), Mike Christie
2016-03-08  1:44 (unknown), 王少辉
2016-02-29  3:50 (unknown), 鼎张
2015-11-23  2:15 (unknown), Dong Wu
2015-11-17  7:34 (unknown), 1990 Self
2015-11-13 17:00 (unknown), Guang Yang
2015-10-20  8:10 (unknown), maillist_linux
2015-10-06 21:25 (unknown), Aakanksha Pudipeddi-SSI
2015-09-22 12:57 (unknown), Redynk, Lukasz
2015-05-16 10:42 (unknown), Haomai Wang
2015-03-11  2:43 (unknown), Andrew Shewmaker
     [not found] <491603614.2711416.1420447432492.JavaMail.yahoo@jws100153.mail.ne1.yahoo.com>
     [not found] ` <791409998.2699745.1420449010721.JavaMail.yahoo@jws10094.mail.ne1.yahoo.com>
     [not found]   ` <2097467700.2731314.1420449474531.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
     [not found]     ` <240226799.2727140.1420449969125.JavaMail.yahoo@jws10067.mail.ne1.yahoo.com>
     [not found]       ` <1939115871.2753921.1420457274798.JavaMail.yahoo@jws10085.mail.ne1.yahoo.com>
     [not found]         ` <1994957961.2730912.1420457695514.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>
     [not found]           ` <1680079570.2743238.1420458170796.JavaMail.yahoo@jws100211.mail.ne1.yahoo.com>
     [not found]             ` <1659658722.2746194.1420458683277.JavaMail.yahoo@jws10057.mail.ne1.yahoo.com>
     [not found]               ` <676811817.2753620.1420459085042.JavaMail.yahoo@jws100122.mail.ne1.yahoo.com>
     [not found]                 ` <292828112.2728941.1420459779830.JavaMail.yahoo@jws10062.mail.ne1.yahoo.com>
     [not found]                     ` <1754401410.2748317.1420460266117.JavaMail.yahoo@jws10086.mail.ne1.yahoo.com>
     [not found]                     ` <1860808438.2753482.1420460950827.JavaMail.yahoo@jws100138.mail.ne1.yahoo.com>
     [not found]                       ` <509045745.2731669.1420461374473.JavaMail.yahoo@jws10063.mail.ne1.yahoo.com>
     [not found]                         ` <2027814345.1724383.1420461875563.JavaMail.yahoo@jws10002g.mail.ne1.yahoo.com>
     [not found]                           ` <1968321735.2759124.1420462309628.JavaMail.yahoo@jws100110.mail.ne1.yahoo.com>
     [not found]                             ` <463667213.2418879.1420462687894.JavaMail.yahoo@jws10077.mail.ne1.yahoo.com>
     [not found]                               ` <1379589304.738623.1420475580420.JavaMail.yahoo@jws100105.mail.ne1.yahoo.com>
     [not found]                                 ` <755691156.2814422.1420475746824.JavaMail.yahoo@jws10032.mail.ne1.yahoo.com>
     [not found]                                   ` <472406141.2821728.1420476367350.JavaMail.yahoo@jws100120.mail.ne1.yahoo.com>
     [not found]                                     ` <221715421.2841474.1420477779861.JavaMail.yahoo@jws100168.mail.ne1.yahoo.com>
     [not found]                                       ` <254713802.2842121.1420478365372.JavaMail.yahoo@jws10038.mail.ne1.yahoo.com>
     [not found]                                         ` <1464384704.2848608.1420478734002.JavaMail.yahoo@jws100207.mail.ne1.yahoo.com>
     [not found]                                           ` <4065663.2836254.1420480471129.JavaMail.yahoo@jws10042.mail.ne1.yahoo.com>
     [not found]                                             ` <1320938856.2854165.1420481033791.JavaMail.yahoo@jws100151.mail.ne1.yahoo.com>
     [not found]                                               ` <1135342988.2849228.1420481646955.JavaMail.yahoo@jws100179.mail.ne1.yahoo.com>
     [not found]                                                 ` <2074478185.2848501.1420482268248.JavaMail.yahoo@jws10082.mail.ne1.yahoo.com>
     [not found]                                                   ` <1712723305.2866482.1420482691782.JavaMail.yahoo@jws100183.mail.ne1.yahoo.com>
     [not found]                                                     ` <252627126.2892667.1420488068173.JavaMail.yahoo@jws10029.mail.ne1.yahoo.com>
     [not found]                                                       ` <1065493696.2895450.1420488484721.JavaMail.yahoo@jws10076.mail.ne1.yahoo.com>
     [not found]                                                         ` <1203194643.2918350.1420491573271.JavaMail.yahoo@jws100108.mail.ne1.yahoo.com>
     [not found]                                                           ` <1046709736.2914723.1420491997552.JavaMail.yahoo@jws10054.mail.ne1.yahoo.com>
     [not found]                                                             ` <1995371219.2936216.1420492558117.JavaMail.yahoo@jws100142.mail.ne1.yahoo.com>
     [not found]                                                               ` <2001229887.2923985.1420493570028.JavaMail.yahoo@jws100125.mail.ne1.yahoo.com>
     [not found]                                                                 ` <348533999.2937953.1420493983611.JavaMail.yahoo@jws10053.mail.ne1.yahoo.com>
     [not found]                                                                   ` <1340159510.32526.1420519464410.JavaMail.yahoo@jws10081.mail.ne1.yahoo.com>
     [not found]                                                                     ` <962476274.1899010.1420521106577.JavaMail.yahoo@jws10001g.mail.ne1.yahoo.com>
     [not found]                                                                       ` <1458665553.3048449.1420521913960.JavaMail.yahoo@jws100107.mail.ne1.yahoo.com>
     [not found]                                                                         ` <1012235344.3036312.1420522405591.JavaMail.yahoo@jws100104.mail.ne1.yahoo.com>
     [not found]                                                                           ` <1547653530.3020536.1420523662414.JavaMail.yahoo@jws10040.mail.ne1.yahoo.com>
     [not found]                                                                             ` <1790092626.3029305.1420524241447.JavaMail.yahoo@jws100209.mail.ne1.yahoo.com>
     [not found]                                                                               ` <2092522293.3046570.1420524984091.JavaMail.yahoo@jws100135.mail.ne1.yahoo.com>
     [not found]                                                                                 ` <1252881721.3050693.1420525551385.JavaMail.yahoo@jws100140.mail.ne1.yahoo.com>
     [not found]                                                                                   ` <51065461.3051024.1420526361766.JavaMail.yahoo@jws10035.mail.ne1.yahoo.com>
     [not found]                                                                                     ` <2088659211.3041886.1420527084737.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
     [not found]                                                                                       ` <1146750372.3050740.1420527826885.JavaMail.yahoo@jws100148.mail.ne1.yahoo.com>
     [not found]                                                                                         ` <200807348.3052863.1420529187986.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
     [not found]                                                                                           ` <318130978.3851172.1420732530471.JavaMail.yahoo@jws10041.mail.ne1.yahoo.com>
     [not found]                                                                                             ` <1581418323.3896621.1420740180883.JavaMail.yahoo@jws100132.mail.ne1.yahoo.com>
     [not found]                                                                                               ` <1141109426.3909408.1420741083576.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
     [not found]                                                                                                 ` <319184965.3917235.1420743616835.JavaMail.yahoo@jws10029.mail.ne1.yahoo.com>
     [not found]                                                                                                   ` <324427432.3992904.1420755522887.JavaMail.yahoo@jws100134.mail.ne1.yahoo.com>
     [not found]                                                                                                     ` <1454203836.464995.1421052471046.JavaMail.yahoo@jws10038.mail.ne1.yahoo.com>
2015-01-12 18:09                                                                                                       ` (unknown) MR. MORGAN W. THABO
2014-12-13  1:16 (unknown) wanglin
     [not found] <1570038211.167595.1414613146892.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>
     [not found] ` <1835234304.171617.1414613165674.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
     [not found]   ` <1938862685.172387.1414613200459.JavaMail.yahoo@jws100180.mail.ne1.yahoo.com>
     [not found]     ` <705402329.170339.1414613213653.JavaMail.yahoo@jws10087.mail.ne1.yahoo.com>
     [not found]       ` <760168749.169371.1414613227586.JavaMail.yahoo@jws10082.mail.ne1.yahoo.com>
     [not found]         ` <1233923671.167957.1414613439879.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
     [not found]           ` <925985882.172122.1414613520734.JavaMail.yahoo@jws100207.mail.ne1.yahoo.com>
     [not found]             ` <1216694778.172990.1414613570775.JavaMail.yahoo@jws100152.mail.ne1.yahoo.com>
     [not found]               ` <1213035306.169838.1414613612716.JavaMail.yahoo@jws10097.mail.ne1.yahoo.com>
     [not found]                 ` <2058591563.172973.1414613668636.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
     [not found]                   ` <1202030640.175493.1414613712352.JavaMail.yahoo@jws10036.mail.ne1.yahoo.com>
     [not found]                     ` <1111049042.175610.1414613739099.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
     [not found]                       ` <574125160.175950.1414613784216.JavaMail.yahoo@jws100158.mail.ne1.yahoo.com>
     [not found]                         ` <1726966600.175552.1414613846198.JavaMail.yahoo@jws100190.mail.ne1.yahoo.com>
     [not found]                           ` <976499752.219775.1414613888129.JavaMail.yahoo@jws100101.mail.ne1.yahoo.com>
     [not found]                             ` <1400960529.171566.1414613936238.JavaMail.yahoo@jws10059.mail.ne1.yahoo.com>
     [not found]                               ` <1333619289.175040.1414613999304.JavaMail.yahoo@jws100196.mail.ne1.yahoo.com>
     [not found]                                 ` <1038759122.176173.1414614054070.JavaMail.yahoo@jws100138.mail.ne1.yahoo.com>
     [not found]                                   ` <1109995533.176150.1414614101940.JavaMail.yahoo@jws100140.mail.ne1.yahoo.com>
     [not found]                                     ` <809474730.174920.1414614143971.JavaMail.yahoo@jws100154.mail.ne1.yahoo.com>
     [not found]                                       ` <1234226428.170349.1414614189490.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>
     [not found]                                         ` <1122464611.177103.1414614228916.JavaMail.yahoo@jws100161.mail.ne1.yahoo.com>
     [not found]                                           ` <1350859260.174219.1414614279095.JavaMail.yahoo@jws100176.mail.ne1.yahoo.com>
     [not found]                                             ` <1730751880.171557.1414614322033.JavaMail.yahoo@jws10060.mail.ne1.yahoo.com>
     [not found]                                               ` <642429550.177328.1414614367628.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
     [not found]                                                 ` <1400780243.20511.1414614418178.JavaMail.yahoo@jws100162.mail.ne1.yahoo.com>
     [not found]                                                   ` <2025652090.173204.1414614462119.JavaMail.yahoo@jws10087.mail.ne1.yahoo.com>
     [not found]                                                     ` <859211720.180077.1414614521867.JavaMail.yahoo@jws100147.mail.ne1.yahoo.com>
     [not found]                                                       ` <258705675.173585.1414614563057.JavaMail.yahoo@jws10078.mail.ne1.yahoo.com>
     [not found]                                                         ` <1773234186.173687.1414614613736.JavaMail.yahoo@jws10078.mail.ne1.yahoo.com>
     [not found]                                                           ` <1132079010.173033.1414614645153.JavaMail.yahoo@jws10066.mail.ne1.yahoo.com>
     [not found]                                                             ` <1972302405.176488.1414614708676.JavaMail.yahoo@jws100166.mail.ne1.yahoo.com>
     [not found]                                                               ` <1713123000.176308.1414614771694.JavaMail.yahoo@jws10045.mail.ne1.yahoo.com>
     [not found]                                                                 ` <299800233.173413.1414614817575.JavaMail.yahoo@jws10066.mail.ne1.yahoo.com>
     [not found]                                                                   ` <494469968.179875.1414614903152.JavaMail.yahoo@jws100144.mail.ne1.yahoo.com>
     [not found]                                                                     ` <2136945987.171995.1414614942776.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
     [not found]                                                                       ` <257674219.177708.1414615022592.JavaMail.yahoo@jws100181.mail.ne1.yahoo.com>
     [not found]                                                                         ` <716927833.181664.1414615075308.JavaMail.yahoo@jws100145.mail.ne1.yahoo.com>
     [not found]                                                                           ` <874940984.178797.1414615132802.JavaMail.yahoo@jws100157.mail.ne1.yahoo.com>
     [not found]                                                                             ` <1283488887.176736.1414615187657.JavaMail.yahoo@jws100183.mail.ne1.yahoo.com>
     [not found]                                                                               ` <777665713.175887.1414615236293.JavaMail.yahoo@jws10083.mail.ne1.yahoo.com>
     [not found]                                                                                 ` <585395776.176325.1414615298260.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
     [not found]                                                                                   ` <178352191.221832.1414615355071.JavaMail.yahoo@jws100104.mail.ne1.yahoo.com>
     [not found]                                                                                     ` <108454213.176606.1414615522058.JavaMail.yahoo@jws10053.mail.ne1.yahoo.com>
     [not found]                                                                                       ` <1617229176.177502.1414615563724.JavaMail.yahoo@jws10030.mail.ne1.yahoo.com>
     [not found]                                                                                         ` <324334617.178254.1414615625247.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
     [not found]                                                                                           ` <567135865.82376.1414615664442.JavaMail.yahoo@jws100136.mail.ne1.yahoo.com>
     [not found]                                                                                             ` <764758300.179669.1414615711821.JavaMail.yahoo@jws100107.mail.ne1.yahoo.com>
     [not found]                                                                                               ` <1072855470.183388.1414615775798.JavaMail.yahoo@jws100147.mail.ne1.yahoo.com>
     [not found]                                                                                                 ` <2134283632.173314.1414615831322.JavaMail.yahoo@jws10094.mail.ne1.yahoo.com>
     [not found]                                                                                                   ` <1454491902.178612.1414615875076.JavaMail.yahoo@jws100209.mail.ne1.yahoo.com>
     [not found]                                                                                                     ` <2080680570.142780.1414957455504.JavaMail.yahoo@jws10049.mail.ne1.yahoo.com>
2014-11-02 19:47                                                                                                       ` (unknown) MRS GRACE MANDA
2014-10-27  0:06 (unknown), Logan Vig
2014-10-04 17:45 (unknown) Andrew Gaul
2014-06-13 10:18 (unknown), Mrs Teresa AU
2014-05-19  5:35 (unknown) Songjiang Zhao
2014-04-16 14:58 (unknown), Ilya Storozhilov
2014-01-15 12:00 (unknown), Elite Homes
2014-01-12  3:13 (unknown) Songjiang Zhao
2013-10-22 12:05 (unknown), COMPANY
2013-06-10  9:53 (unknown) Ta Ba Tuan
2013-05-30  1:38 (unknown) Ta Ba Tuan
     [not found] <19392198-aa95-4c84-ac5c-ef496bb3cedc@mail-1.enovance.com>
2012-12-10 17:30 ` (unknown), Alexandre Maumené
2012-10-23  4:12 (unknown), jie sun
2012-09-26  6:50 (unknown), Fabian.Eichstaedt
2012-07-24 18:47 (unknown), Mitch Anderson
2012-07-09  1:00 (unknown), HarmeekSingh Bedi
2012-05-25 11:18 (unknown), robothroli company
2012-05-25  5:30 (unknown), Nam Dang
2012-05-08  0:54 (unknown), Tim Flavin
2012-03-19  1:39 (unknown) 佐々木 喜徳
2012-02-04  4:46 (unknown), Masuko Tomoya
2011-11-16 19:41 (unknown), Wido den Hollander
2011-10-18  3:08 (unknown), Kota Hada
2011-07-21 17:22 (unknown) Western Union®
2011-05-21 12:54 (unknown), western101@algish.com
2011-05-03 16:05 (unknown), ken leo
2011-03-22  0:48 (unknown), Sage Weil
2011-03-01  9:48 (unknown), Saurav Lahiri
2010-09-24 17:10 (unknown) Important Notice

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox