* Re:
@ 2011-08-15 23:01 jeffrice
0 siblings, 0 replies; 19+ messages in thread
From: jeffrice @ 2011-08-15 23:01 UTC (permalink / raw)
I have a business proposal for you worth 7.5 million Great British
Pounds Sterling. If you are interested, please send a response.
Best regards,
Jeff Rice
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-05-08 0:54 (unknown), Tim Flavin
@ 2012-05-17 21:10 ` Josh Durgin
0 siblings, 0 replies; 19+ messages in thread
From: Josh Durgin @ 2012-05-17 21:10 UTC (permalink / raw)
To: Tim Flavin; +Cc: ceph-devel
On 05/07/2012 05:54 PM, Tim Flavin wrote:
> The new site is great! I like the Ceph documentation; however, I found
> a couple of typos. Is this the best place to address them? (Some of the
> apparent typos may be my not understanding what is going on.)
>
>
>
> http://ceph.com/docs/master/config-cluster/ceph-conf/
>
> The "Hardware Recommendations" link near the bottom of the page gives
> a 404. Did you want to point to
> http://ceph.com/docs/master/install/hardware-recommendations/ ?
>
>
> http://ceph.com/docs/master/config-ref/osd-config
>
> For "osd client message size cap" The default value is 500 MB but
> the description lists it a 200 MB.
>
>
> http://ceph.com/docs/master/api/librbdpy/
>
> The line of code: "size = 4 * 1024 * 1024 # 4 GiB" appears to be
> missing a * 1024, and the next line
> is "rbd_inst.create('myimage', 4)" when it probably should be
> "rbd_inst.create('myimage', size)" This is repeated several times.
Thanks for the notes - I've fixed these in the master branch.
All the docs are in git under the doc directory - if you find other
problems, feel free to send a patch or a github pull request. You can
even edit it in a browser on github if you like.
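For reference, the corrected librbd example described in the report above would read as follows (a sketch: `rbd_inst` stands for the `rbd.RBD()` instance from the python-rbd bindings, and the create call is shown commented out since it needs a live cluster):

```python
# The size constant from the docs example, with the missing * 1024 restored:
# three factors of 1024 give GiB, not MiB.
size = 4 * 1024 * 1024 * 1024  # 4 GiB

assert size == 4 * 1024 ** 3

# The create call should then pass `size` rather than the literal 4:
# rbd_inst.create('myimage', size)
print(size)
```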
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-07-01 20:22 ` Chuanyu
@ 2012-07-02 9:35 ` Chuanyu Tsai
0 siblings, 0 replies; 19+ messages in thread
From: Chuanyu Tsai @ 2012-07-02 9:35 UTC (permalink / raw)
To: ceph-devel
Chuanyu <chuanyu <at> cs.nctu.edu.tw> writes:
> Hi Yehuda, Florian,
>
> I followed the wiki, and the steps which you discussed, to
> construct my Ceph system with the RADOS gateway,
> and I can use libs3 to upload files via radosgw (thanks a lot!),
> but I got "405 Method Not Allowed" when I use swift:
>
> $ swift -v -A http://s3.paca.tw:80/auth -U paca:paca1 -K
> UoJO4nFgdAoX+9nEftElIY+AMmDIkcrUBkycNKPA stat
> Auth GET failed: http://s3.paca.tw:80/auth/tokens 405 Method Not Allowed
>
> (Because there is no test step on the wiki,
> I followed Florian's question and guessed that the test command is the one above.)
>
> my radosgw-admin config:
> $ radosgw-admin user info --uid=paca
> { "user_id": "paca",
> "rados_uid": 0,
> "display_name": "chuanyu",
> "email": "chuanyu <at> cs.nctu.edu.tw",
> "suspended": 0,
> "subusers": [
> { "id": "paca:paca1",
> "permissions": "full-control"}],
I've corrected the permissions problem, thanks Florian!
> "keys": [
> { "user": "paca",
> "access_key": "DS932H4EI9HK7I1CTDNF",
> "secret_key": "Rn\/5FqHzRPZFN6f9R\/LuTqvG0AYjbHtrurrGydVk"}],
> "swift_keys": [
> { "user": "paca:paca1",
> "secret_key": "UoJO4nFgdAoX+9nEftElIY+AMmDIkcrUBkycNKPA"}]}
>
> ceph.conf:
> [client.radosgw.gateway]
> host = volume
> keyring = /etc/ceph/keyring/radosgw.gateway.keyring
> rgw socket path = /var/run/ceph/rgw.sock
> log file = ""
> syslog = true
> debug rgw = 20
>
> my log:
> http://pastebin.com/rhGhATmv
Hi,
I've noticed that the log shows the *POST* method being used when getting the op:
req 9:0.000277:swift-auth:POST /auth/tokens::getting op
But the code shows that anything other than GET returns NULL:
/ceph/src/rgw/rgw_swift_auth.cc:239
239 RGWOp *RGWHandler_SWIFT_Auth::get_op()
240 {
241 RGWOp *op;
242 switch (s->op) {
243 case OP_GET:
244 op = &rgw_swift_auth_get;
245 break;
246 default:
247 return NULL;
248 }
So the 405 error occurs:
/ceph/src/rgw/rgw_main.cc:273
273 req->log(s, "getting op");
274 op = handler->get_op();
275 if (!op) {
276 abort_early(s, -ERR_METHOD_NOT_ALLOWED);
277 goto done;
My swift client version (package 1.4.8-0ubuntu2 on Ubuntu 12.04):
$ swift --version
swift 1.0
Is there a version mismatch, or is something else going wrong?
I'll try a direct curl connection later.
Thanks!
Chuanyu Tsai.
>
> Any advice would be appreciated!
> Thanks,
> Chuanyu
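The direct curl check mentioned above can be sketched as follows. This is an assumption on my part: the legacy radosgw swift auth handler accepts only a GET on /auth with X-Auth-User/X-Auth-Key headers, which is why a POST to /auth/tokens falls through to the 405. The command is echoed rather than executed, since running it needs the live gateway; the credentials are the swift subuser keys from the radosgw-admin output above.

```shell
# Hypothetical direct check of the v1.0 swift auth endpoint.
AUTH_USER="paca:paca1"
AUTH_KEY="UoJO4nFgdAoX+9nEftElIY+AMmDIkcrUBkycNKPA"
# GET (not POST) is the only method RGWHandler_SWIFT_Auth::get_op() accepts:
echo curl -i -H "X-Auth-User: ${AUTH_USER}" \
            -H "X-Auth-Key: ${AUTH_KEY}" \
            http://s3.paca.tw:80/auth
```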
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-23 4:12 (unknown), jie sun
@ 2012-10-23 11:50 ` Wido den Hollander
2012-10-24 5:48 ` Re: jie sun
0 siblings, 1 reply; 19+ messages in thread
From: Wido den Hollander @ 2012-10-23 11:50 UTC (permalink / raw)
To: jie sun; +Cc: ceph-devel
On 10/23/2012 06:12 AM, jie sun wrote:
> Hi,
>
> I created and mounted an RBD for a virtual machine. It can be used
> as a block device normally, but it often prints log messages like those below:
Could you provide us a bit more information?
What kernel are you using?
What does "ceph -s" show you?
Are you running KVM virtual machines and connecting /dev/rbd0 as a
device to the virtual machine?
Wido
> "Oct 23 10:30:22 ubuntu12 kernel: [321506.941606] libceph: osd3
> 10.100.211.146:6810 socket closed
> Oct 23 10:30:59 ubuntu12 kernel: [321544.337856] libceph: osd9
> 10.100.211.68:6809 socket closed
> Oct 23 10:45:22 ubuntu12 kernel: [322407.233090] libceph: osd3
> 10.100.211.146:6810 socket closed
> Oct 23 10:45:59 ubuntu12 kernel: [322444.766796] libceph: osd9
> 10.100.211.68:6809 socket closed
> Oct 23 11:00:22 ubuntu12 kernel: [323307.529098] libceph: osd3
> 10.100.211.146:6810 socket closed
> Oct 23 11:01:00 ubuntu12 kernel: [323345.241679] libceph: osd9
> 10.100.211.68:6809 socket closed
> Oct 23 11:15:22 ubuntu12 kernel: [324207.821113] libceph: osd3
> 10.100.211.146:6810 socket closed
> Oct 23 11:16:00 ubuntu12 kernel: [324245.717747] libceph: osd9
> 10.100.211.68:6809 socket closed
> Oct 23 11:17:01 ubuntu12 CRON[10529]: (root) CMD ( cd / && run-parts
> --report /etc/cron.hourly)
> Oct 23 11:30:23 ubuntu12 kernel: [325108.117134] libceph: osd3
> 10.100.211.146:6810 socket closed"
>
> These log also can be found in "/var/log/syslog".
> I google something about this problem,but didn't understand what
> you've wrote in "http://tracker.newdream.net/issues/2260" exactly.
> How can I resolve this problem? My ceph version is 0.48.
> Should I change some files, or modify some content of some file?
>
> Thank you !
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-23 11:50 ` Wido den Hollander
@ 2012-10-24 5:48 ` jie sun
2012-10-24 5:58 ` Re: Gregory Farnum
0 siblings, 1 reply; 19+ messages in thread
From: jie sun @ 2012-10-24 5:48 UTC (permalink / raw)
To: Wido den Hollander; +Cc: ceph-devel
My VM kernel version is "Linux ubuntu12 3.2.0-23-generic".
"ceph -s" shows
" health HEALTH_OK
monmap e1: 1 mons at {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a
osdmap e152: 10 osds: 9 up, 9 in
pgmap v48479: 2112 pgs: 2112 active+clean; 23161 MB data, 46323 MB
used, 2451 GB / 2514 GB avail
mdsmap e31: 1/1/1 up {0=a=up:active} "
In my VM, I do the following operations:
I install 4 debs on my VM (libnss3, libnspr4, librados2,
librbd1), and then execute "modprobe rbd" so that I can map an image in
my VM.
Then "rbd create foo --size 10240 -m $monIP(my ceph mon IP)",
"rbd map foo -m $monIP" ------ Here a device /dev/rbd0 can be
used as a local device
"mkfs -t ext4 /dev/rbd0"
"mount /dev/rbd0 /mnt(or some other directory)"
After the operations above, I can use this device, but it often
prints log messages like "libceph: osd9 10.100.211.68:6809 socket closed".
I just wanted to mount a device in my VM, so I didn't install a full Ceph
client. Is it proper to do so?
Thank you for your answer!
2012/10/23 Wido den Hollander <wido@widodh.nl>:
> On 10/23/2012 06:12 AM, jie sun wrote:
>>
>> Hi,
>>
>> I created and mounted a rbd for a virtual machine. And it can be used
>> as a block device normally, but often prompt some log like below:
>
>
> Could you provide us a bit more information?
>
> What kernel are you using?
>
> What does "ceph -s" show you?
>
> Are you running KVM virtual machines and connecting /dev/rbd0 as a device to
> the virtual machine?
>
> Wido
>
>> "Oct 23 10:30:22 ubuntu12 kernel: [321506.941606] libceph: osd3
>> 10.100.211.146:6810 socket closed
>> Oct 23 10:30:59 ubuntu12 kernel: [321544.337856] libceph: osd9
>> 10.100.211.68:6809 socket closed
>> Oct 23 10:45:22 ubuntu12 kernel: [322407.233090] libceph: osd3
>> 10.100.211.146:6810 socket closed
>> Oct 23 10:45:59 ubuntu12 kernel: [322444.766796] libceph: osd9
>> 10.100.211.68:6809 socket closed
>> Oct 23 11:00:22 ubuntu12 kernel: [323307.529098] libceph: osd3
>> 10.100.211.146:6810 socket closed
>> Oct 23 11:01:00 ubuntu12 kernel: [323345.241679] libceph: osd9
>> 10.100.211.68:6809 socket closed
>> Oct 23 11:15:22 ubuntu12 kernel: [324207.821113] libceph: osd3
>> 10.100.211.146:6810 socket closed
>> Oct 23 11:16:00 ubuntu12 kernel: [324245.717747] libceph: osd9
>> 10.100.211.68:6809 socket closed
>> Oct 23 11:17:01 ubuntu12 CRON[10529]: (root) CMD ( cd / && run-parts
>> --report /etc/cron.hourly)
>> Oct 23 11:30:23 ubuntu12 kernel: [325108.117134] libceph: osd3
>> 10.100.211.146:6810 socket closed"
>>
>> These log also can be found in "/var/log/syslog".
>> I google something about this problem,but didn't understand what
>> you've wrote in "http://tracker.newdream.net/issues/2260" exactly.
>> How can I resolve this problem? My ceph version is 0.48.
>> Should I change some files, or modify some content of some file?
>>
>> Thank you !
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-24 5:48 ` Re: jie sun
@ 2012-10-24 5:58 ` Gregory Farnum
[not found] ` <CAB6Jr7SbbAE=yEVgg+UupTmavKfvFvGj8j7C9M0Ya2FocNmw9w@mail.gmail.com>
0 siblings, 1 reply; 19+ messages in thread
From: Gregory Farnum @ 2012-10-24 5:58 UTC (permalink / raw)
To: jie sun; +Cc: Wido den Hollander, ceph-devel
On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
> My vm kernel version is "Linux ubuntu12 3.2.0-23-generic".
>
> "ceph-s" shows
> " health HEALTH_OK
> monmap e1: 1 mons at {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a
> osdmap e152: 10 osds: 9 up, 9 in
> pgmap v48479: 2112 pgs: 2112 active+clean; 23161 MB data, 46323 MB
> used, 2451 GB / 2514 GB avail
> mdsmap e31: 1/1/1 up {0=a=up:active} "
>
> In my vm, I do operations like:
> I install 4 debs on my vm, such as libnss3, libnspr4, librados2,
> librbd1. And then execute "modprobe rbd" so that I can map a image to
> my vm.
> Then "rbd create foo --size 10240 -m $monIP(my ceph mon IP)",
> "rbd map foo -m $monIP" ------ Here a device /dev/rbd0 can be
> used as a local device
> "mkfs -t ext4 /dev/rbd0"
> "mount /dev/rbd0 /mnt(or some other directory)"
> After the operations above, I can use this device. But it oftern
> prompt some log like "libceph: osd9 10.100.211.68:6809 socket closed".
> I just want to mount a device to my vm, so I didn't install a ceph
> client. Is this proper to do so?
You might consider using the native QEMU/libvirt instead; it offers some more advanced options. But if you're happy with it, this certainly works!
The "socket closed" messages are just noise; it's nothing to be concerned about (you'll notice they're happening every 15 minutes for each OSD; probably you aren't doing any disk accesses). I think these warnings actually got removed from our master branch a few days ago.
-Greg
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
[not found] ` <CAB6Jr7SbbAE=yEVgg+UupTmavKfvFvGj8j7C9M0Ya2FocNmw9w@mail.gmail.com>
@ 2012-10-25 12:15 ` Gregory Farnum
2012-10-25 14:36 ` Re: Alex Elder
2012-10-26 3:08 ` Re: jie sun
0 siblings, 2 replies; 19+ messages in thread
From: Gregory Farnum @ 2012-10-25 12:15 UTC (permalink / raw)
To: jie sun; +Cc: ceph-devel
Sorry, I was unclear — I meant I think[1] it was fixed in our linux branch, for future kernel releases. The messages you're seeing are just logging a perfectly normal event that's part of the Ceph protocol.
-Greg
[1]: I'd have to check to make sure. Sage, Alex, am I remembering that correctly?
On Wednesday, October 24, 2012 at 11:45 PM, jie sun wrote:
> What is the version of the master branch ? I use the stable version 0.48.2
> Thank you!
> -SunJie
>
> 2012/10/24 Gregory Farnum <greg@inktank.com>:
> > On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
> > > My vm kernel version is "Linux ubuntu12 3.2.0-23-generic".
> > >
> > > "ceph-s" shows
> > > " health HEALTH_OK
> > > monmap e1: 1 mons at {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a
> > > osdmap e152: 10 osds: 9 up, 9 in
> > > pgmap v48479: 2112 pgs: 2112 active+clean; 23161 MB data, 46323 MB
> > > used, 2451 GB / 2514 GB avail
> > > mdsmap e31: 1/1/1 up {0=a=up:active} "
> > >
> > > In my vm, I do operations like:
> > > I install 4 debs on my vm, such as libnss3, libnspr4, librados2,
> > > librbd1. And then execute "modprobe rbd" so that I can map a image to
> > > my vm.
> > > Then "rbd create foo --size 10240 -m $monIP(my ceph mon IP)",
> > > "rbd map foo -m $monIP" ------ Here a device /dev/rbd0 can be
> > > used as a local device
> > > "mkfs -t ext4 /dev/rbd0"
> > > "mount /dev/rbd0 /mnt(or some other directory)"
> > > After the operations above, I can use this device. But it oftern
> > > prompt some log like "libceph: osd9 10.100.211.68:6809 socket closed".
> > > I just want to mount a device to my vm, so I didn't install a ceph
> > > client. Is this proper to do so?
> >
> >
> >
> > You might consider using the native QEMU/libvirt instead; it offers some more advanced options. But if you're happy with it, this certainly works!
> >
> > The "socket closed" messages are just noise; it's nothing to be concerned about (you'll notice they're happening every 15 minutes for each OSD; probably you aren't doing any disk accesses). I think these warnings actually got removed from our master branch a few days ago.
> > -Greg
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-25 12:15 ` Re: Gregory Farnum
@ 2012-10-25 14:36 ` Alex Elder
2012-10-25 15:38 ` Re: Sage Weil
2012-10-26 3:08 ` Re: jie sun
1 sibling, 1 reply; 19+ messages in thread
From: Alex Elder @ 2012-10-25 14:36 UTC (permalink / raw)
To: Gregory Farnum; +Cc: jie sun, ceph-devel
On 10/25/2012 07:15 AM, Gregory Farnum wrote:
> Sorry, I was unclear — I meant I think[1] it was fixed in our linux
> branch, for future kernel releases. The messages you're seeing are
> just logging a perfectly normal event that's part of the Ceph
> protocol. -Greg [1]: I'd have to check to make sure. Sage, Alex, am I
> remembering that correctly?
I see those too. I think the socket (the other end?) closes after
a period of inactivity, but it does re-open and reconnect again
whenever necessary so that should really be fine.
The messages have not gone away yet; I personally think they should.
They originate from here, in net/ceph/messenger.c:
static void ceph_fault(struct ceph_connection *con)
__releases(con->mutex)
{
pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg)
Perhaps this should become pr_info() or something. Sage?
-Alex
> On Wednesday, October 24, 2012 at 11:45 PM, jie sun wrote:
>
>> What is the version of the master branch ? I use the stable version
>> 0.48.2 Thank you! -SunJie
>>
>> 2012/10/24 Gregory Farnum <greg@inktank.com>:
>>> On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
>>>> My vm kernel version is "Linux ubuntu12 3.2.0-23-generic".
>>>>
>>>> "ceph-s" shows " health HEALTH_OK monmap e1: 1 mons at
>>>> {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a osdmap
>>>> e152: 10 osds: 9 up, 9 in pgmap v48479: 2112 pgs: 2112
>>>> active+clean; 23161 MB data, 46323 MB used, 2451 GB / 2514 GB
>>>> avail mdsmap e31: 1/1/1 up {0=a=up:active} "
>>>>
>>>> In my vm, I do operations like: I install 4 debs on my vm, such
>>>> as libnss3, libnspr4, librados2, librbd1. And then execute
>>>> "modprobe rbd" so that I can map a image to my vm. Then "rbd
>>>> create foo --size 10240 -m $monIP(my ceph mon IP)", "rbd map
>>>> foo -m $monIP" ------ Here a device /dev/rbd0 can be used as a
>>>> local device "mkfs -t ext4 /dev/rbd0" "mount /dev/rbd0 /mnt(or
>>>> some other directory)" After the operations above, I can use
>>>> this device. But it oftern prompt some log like "libceph: osd9
>>>> 10.100.211.68:6809 socket closed". I just want to mount a
>>>> device to my vm, so I didn't install a ceph client. Is this
>>>> proper to do so?
>>>
>>>
>>>
>>> You might consider using the native QEMU/libvirt instead; it
>>> offers some more advanced options. But if you're happy with it,
>>> this certainly works!
>>>
>>> The "socket closed" messages are just noise; it's nothing to be
>>> concerned about (you'll notice they're happening every 15 minutes
>>> for each OSD; probably you aren't doing any disk accesses). I
>>> think these warnings actually got removed from our master branch
>>> a few days ago. -Greg
>>
>
>
>
> -- To unsubscribe from this list: send the line "unsubscribe
> ceph-devel" in the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-25 14:36 ` Re: Alex Elder
@ 2012-10-25 15:38 ` Sage Weil
2012-10-25 21:28 ` Re: Dan Mick
0 siblings, 1 reply; 19+ messages in thread
From: Sage Weil @ 2012-10-25 15:38 UTC (permalink / raw)
To: Alex Elder; +Cc: Gregory Farnum, jie sun, ceph-devel
On Thu, 25 Oct 2012, Alex Elder wrote:
> On 10/25/2012 07:15 AM, Gregory Farnum wrote:
> > Sorry, I was unclear — I meant I think[1] it was fixed in our linux
> > branch, for future kernel releases. The messages you're seeing are
> > just logging a perfectly normal event that's part of the Ceph
> > protocol. -Greg [1]: I'd have to check to make sure. Sage, Alex, am I
> > remembering that correctly?
>
> I see those too. I think the socket (the other end?) closes after
> a period of inactivity, but it does re-open and reconnect again
> whenever necessary so that should really be fine.
>
> The messages have not gone away yet, I personally think they should.
> They originate from here, in net/ceph/messenger.c:
>
> static void ceph_fault(struct ceph_connection *con)
> __releases(con->mutex)
> {
> pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
> ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg)
>
> Perhaps this should become pr_info() or something. Sage?
Yeah, I think pr_info() is probably the right choice. Do you know if that
hits the console by default, or just dmesg/kern.log?
sage
>
> -Alex
>
> > On Wednesday, October 24, 2012 at 11:45 PM, jie sun wrote:
> >
> >> What is the version of the master branch ? I use the stable version
> >> 0.48.2 Thank you! -SunJie
> >>
> >> 2012/10/24 Gregory Farnum <greg@inktank.com>:
> >>> On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
> >>>> My vm kernel version is "Linux ubuntu12 3.2.0-23-generic".
> >>>>
> >>>> "ceph-s" shows " health HEALTH_OK monmap e1: 1 mons at
> >>>> {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a osdmap
> >>>> e152: 10 osds: 9 up, 9 in pgmap v48479: 2112 pgs: 2112
> >>>> active+clean; 23161 MB data, 46323 MB used, 2451 GB / 2514 GB
> >>>> avail mdsmap e31: 1/1/1 up {0=a=up:active} "
> >>>>
> >>>> In my vm, I do operations like: I install 4 debs on my vm, such
> >>>> as libnss3, libnspr4, librados2, librbd1. And then execute
> >>>> "modprobe rbd" so that I can map a image to my vm. Then "rbd
> >>>> create foo --size 10240 -m $monIP(my ceph mon IP)", "rbd map
> >>>> foo -m $monIP" ------ Here a device /dev/rbd0 can be used as a
> >>>> local device "mkfs -t ext4 /dev/rbd0" "mount /dev/rbd0 /mnt(or
> >>>> some other directory)" After the operations above, I can use
> >>>> this device. But it oftern prompt some log like "libceph: osd9
> >>>> 10.100.211.68:6809 socket closed". I just want to mount a
> >>>> device to my vm, so I didn't install a ceph client. Is this
> >>>> proper to do so?
> >>>
> >>>
> >>>
> >>> You might consider using the native QEMU/libvirt instead; it
> >>> offers some more advanced options. But if you're happy with it,
> >>> this certainly works!
> >>>
> >>> The "socket closed" messages are just noise; it's nothing to be
> >>> concerned about (you'll notice they're happening every 15 minutes
> >>> for each OSD; probably you aren't doing any disk accesses). I
> >>> think these warnings actually got removed from our master branch
> >>> a few days ago. -Greg
> >>
> >
> >
> >
> > -- To unsubscribe from this list: send the line "unsubscribe
> > ceph-devel" in the body of a message to majordomo@vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> >
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-25 15:38 ` Re: Sage Weil
@ 2012-10-25 21:28 ` Dan Mick
2012-10-25 22:15 ` Re: Alex Elder
0 siblings, 1 reply; 19+ messages in thread
From: Dan Mick @ 2012-10-25 21:28 UTC (permalink / raw)
To: Sage Weil; +Cc: Alex Elder, Gregory Farnum, jie sun, ceph-devel
>> static void ceph_fault(struct ceph_connection *con)
>> __releases(con->mutex)
>> {
>> pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
>> ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg)
>>
>> Perhaps this should become pr_info() or something. Sage?
>
> Yeah, I think pr_info() is probably the right choice. Do you know if that
> hits the console by default, or just dmesg/kern.log?
pr_info is level 6, KERN_INFO; by default, /proc/sys/kernel/printk has
"4 4 1 7" in it, and the first 4 (console_loglevel) means only messages
with a level numerically below 4 reach the console. So debug, info, and
notice messages all stay off the console.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-25 21:28 ` Re: Dan Mick
@ 2012-10-25 22:15 ` Alex Elder
0 siblings, 0 replies; 19+ messages in thread
From: Alex Elder @ 2012-10-25 22:15 UTC (permalink / raw)
To: Dan Mick; +Cc: Sage Weil, Gregory Farnum, jie sun, ceph-devel
On 10/25/2012 04:28 PM, Dan Mick wrote:
>
>>> static void ceph_fault(struct ceph_connection *con)
>>> __releases(con->mutex)
>>> {
>>> pr_err("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
>>> ceph_pr_addr(&con->peer_addr.in_addr), con->error_msg)
>>>
>>> Perhaps this should become pr_info() or something. Sage?
>>
>> Yeah, I think pr_info() is probably the right choice. Do you know if
>> that
>> hits the console by default, or just dmesg/kern.log?
>
> pr_info is level 6, KERN_INFO; by default, /proc/sys/kernel/printk has
> 4 4 1 7 in it, the first 4 of which means 4-and-lower go to console. So
> debug, info, notice messages all are options for "not console".
>
Excellent. So pr_info() it is. Dan, do you want to implement this?
-Alex
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2012-10-25 12:15 ` Re: Gregory Farnum
2012-10-25 14:36 ` Re: Alex Elder
@ 2012-10-26 3:08 ` jie sun
1 sibling, 0 replies; 19+ messages in thread
From: jie sun @ 2012-10-26 3:08 UTC (permalink / raw)
To: Gregory Farnum; +Cc: ceph-devel
Understood. Thank you.
-SunJie
2012/10/25 Gregory Farnum <greg@inktank.com>:
> Sorry, I was unclear — I meant I think[1] it was fixed in our linux branch, for future kernel releases. The messages you're seeing are just logging a perfectly normal event that's part of the Ceph protocol.
> -Greg
> [1]: I'd have to check to make sure. Sage, Alex, am I remembering that correctly?
>
>
> On Wednesday, October 24, 2012 at 11:45 PM, jie sun wrote:
>
>> What is the version of the master branch ? I use the stable version 0.48.2
>> Thank you!
>> -SunJie
>>
>> 2012/10/24 Gregory Farnum <greg@inktank.com>:
>> > On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
>> > > My vm kernel version is "Linux ubuntu12 3.2.0-23-generic".
>> > >
>> > > "ceph-s" shows
>> > > " health HEALTH_OK
>> > > monmap e1: 1 mons at {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a
>> > > osdmap e152: 10 osds: 9 up, 9 in
>> > > pgmap v48479: 2112 pgs: 2112 active+clean; 23161 MB data, 46323 MB
>> > > used, 2451 GB / 2514 GB avail
>> > > mdsmap e31: 1/1/1 up {0=a=up:active} "
>> > >
>> > > In my vm, I do operations like:
>> > > I install 4 debs on my vm, such as libnss3, libnspr4, librados2,
>> > > librbd1. And then execute "modprobe rbd" so that I can map a image to
>> > > my vm.
>> > > Then "rbd create foo --size 10240 -m $monIP(my ceph mon IP)",
>> > > "rbd map foo -m $monIP" ------ Here a device /dev/rbd0 can be
>> > > used as a local device
>> > > "mkfs -t ext4 /dev/rbd0"
>> > > "mount /dev/rbd0 /mnt(or some other directory)"
>> > > After the operations above, I can use this device. But it oftern
>> > > prompt some log like "libceph: osd9 10.100.211.68:6809 socket closed".
>> > > I just want to mount a device to my vm, so I didn't install a ceph
>> > > client. Is this proper to do so?
>> >
>> >
>> >
>> > You might consider using the native QEMU/libvirt instead; it offers some more advanced options. But if you're happy with it, this certainly works!
>> >
>> > The "socket closed" messages are just noise; it's nothing to be concerned about (you'll notice they're happening every 15 minutes for each OSD; probably you aren't doing any disk accesses). I think these warnings actually got removed from our master branch a few days ago.
>> > -Greg
>>
>
>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* re:
@ 2013-08-23 6:18 info
0 siblings, 0 replies; 19+ messages in thread
From: info @ 2013-08-23 6:18 UTC (permalink / raw)
To: ceph-devel
Hello,
Compliments and good day to you and your family.
Without wasting much of your time, I want to bring you into a business
venture which I think should be of interest and concern to you, since it has
to do with a perceived family member of yours. However, I need to
be sure that you have received this communication, so I will not divulge
much information about it until I get a response from you.
Kindly respond back to me.
Regards,
David
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE:
@ 2015-08-11 10:57 zso2bytom
0 siblings, 0 replies; 19+ messages in thread
From: zso2bytom @ 2015-08-11 10:57 UTC (permalink / raw)
To: Recipients
Now you can get a loan at 2% interest and take up to 40 years or more to repay it. These are not short-term loans that make you pay back within a few weeks or months. Our offer includes: * Refinancing * Home Improvement * Car Loans * Debt Consolidation * Line of Credit * Second Mortgage * Business Loans * Personal Loans
Get the money you need today, with plenty of time to make the payments back. To apply, send all questions or calls to: flowellhelpdesk@gmail.com + 1-435-241-5945
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE:
@ 2015-11-01 20:03 Mario, Franco
0 siblings, 0 replies; 19+ messages in thread
From: Mario, Franco @ 2015-11-01 20:03 UTC (permalink / raw)
To: Recipients
Confirm your email if it is current!!!
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
@ 2017-05-03 11:26 Paul Lopez-Bravo
0 siblings, 0 replies; 19+ messages in thread
From: Paul Lopez-Bravo @ 2017-05-03 11:26 UTC (permalink / raw)
--
Hello,
Allow me to make this very important inquiry through this medium
due to its confidential nature. My name is Mr. Paul Lopez-Bravo,
a lawyer in Spain. I represent the late Philip, who was a wealthy
businessman before his death in 2009. I am confiding in you on an
urgent matter concerning a deposit made by this particular client
of mine before his death. I am seeking your consent to authorize me
to present you as his heir, in order to have his bank hand over the
sum of $7.5 million (seven million five hundred thousand dollars)
deposited in a suspended bank account. His bank has issued me, as
his lawyer, a final ultimatum to present his heir, since the legally
permitted time for such a claim has expired; otherwise the funds
will be confiscated.
The intended transaction will be carried out in a legitimate manner
that will protect you and me from any violation of the law. I will
use my position as the client's lawyer to ensure the processing of
the required legal documentation and the successful execution of
this transaction. All I ask for is your understanding and honest
cooperation for success. Note that after the successful execution
of the transaction, you keep 40% of the total funds after all costs.
I will give you full details once you confirm your interest.
I hope to hear from you soon.
Kind regards,
Paul Lopez-Bravo Esq
Tel: +34692899384
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
@ 2017-11-13 15:04 Amos Kalonzo
0 siblings, 0 replies; 19+ messages in thread
From: Amos Kalonzo @ 2017-11-13 15:04 UTC (permalink / raw)
Attn:
I am wondering why you haven't responded to my email for some days now,
in reference to my client's contract balance payment of (11.7M USD).
Kindly get back to me for more details.
Best Regards
Amos Kalonzo
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
2020-03-27 8:36 (unknown) chenanqing
@ 2020-03-27 8:59 ` Ilya Dryomov
0 siblings, 0 replies; 19+ messages in thread
From: Ilya Dryomov @ 2020-03-27 8:59 UTC (permalink / raw)
To: chenanqing; +Cc: LKML, netdev, Ceph Development, kuba, Sage Weil, Jeff Layton
On Fri, Mar 27, 2020 at 9:36 AM <chenanqing@oppo.com> wrote:
>
> From: Chen Anqing <chenanqing@oppo.com>
> To: Ilya Dryomov <idryomov@gmail.com>
> Cc: Jeff Layton <jlayton@kernel.org>,
> Sage Weil <sage@redhat.com>,
> Jakub Kicinski <kuba@kernel.org>,
> ceph-devel@vger.kernel.org,
> netdev@vger.kernel.org,
> linux-kernel@vger.kernel.org,
> chenanqing@oppo.com
> Subject: [PATCH] libceph: we should take compound page into account also
> Date: Fri, 27 Mar 2020 04:36:30 -0400
> Message-Id: <20200327083630.36296-1-chenanqing@oppo.com>
> X-Mailer: git-send-email 2.18.2
>
> This patch stems from a real crash in which a slab object came
> from a compound page, so we need to take compound pages
> into account as well.
> Fixes: 7e241f647dc7 ("libceph: fall back to sendmsg for slab pages")
>
> Signed-off-by: Chen Anqing <chenanqing@oppo.com>
> ---
> net/ceph/messenger.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
> index f8ca5edc5f2c..e08c1c334cd9 100644
> --- a/net/ceph/messenger.c
> +++ b/net/ceph/messenger.c
> @@ -582,7 +582,7 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
> * coalescing neighboring slab objects into a single frag which
> * triggers one of hardened usercopy checks.
> */
> - if (page_count(page) >= 1 && !PageSlab(page))
> + if (page_count(page) >= 1 && !PageSlab(compound_head(page)))
> sendpage = sock->ops->sendpage;
> else
> sendpage = sock_no_sendpage;
Hi Chen,
AFAICT compound pages should already be taken into account, because
PageSlab is defined as:
__PAGEFLAG(Slab, slab, PF_NO_TAIL)
#define __PAGEFLAG(uname, lname, policy) \
TESTPAGEFLAG(uname, lname, policy) \
__SETPAGEFLAG(uname, lname, policy) \
__CLEARPAGEFLAG(uname, lname, policy)
#define TESTPAGEFLAG(uname, lname, policy) \
static __always_inline int Page##uname(struct page *page) \
{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
and PF_NO_TAIL policy is defined as:
#define PF_NO_TAIL(page, enforce) ({ \
VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page); \
PF_POISONED_CHECK(compound_head(page)); })
So compound_head() is called behind the scenes.
Could you please explain in more detail what crash you observed?
Perhaps you backported this patch to an older kernel?
Thanks,
Ilya
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re:
[not found] ` <5e7dc543.vYG3wru8B/me1sOV%chenanqing-Oq79sGaMObY@public.gmane.org>
@ 2020-03-27 15:53 ` Lee Duncan
0 siblings, 0 replies; 19+ messages in thread
From: Lee Duncan @ 2020-03-27 15:53 UTC (permalink / raw)
To: chenanqing-Oq79sGaMObY, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-scsi-u79uwXL29TY76Z2rM5mHXA,
open-iscsi-/JYPxA39Uh5TLH3MbocFFw,
ceph-devel-u79uwXL29TY76Z2rM5mHXA,
martin.petersen-QHcLZuEGTsvQT0dZR+AlfA,
jejb-tEXmvtCZX7AybS5Ee8rs3A, cleech-H+wXaHxf7aLQT0dZR+AlfA
On 3/27/20 2:20 AM, chenanqing-Oq79sGaMObY@public.gmane.org wrote:
> From: Chen Anqing <chenanqing-Oq79sGaMObY@public.gmane.org>
> To: Lee Duncan <lduncan-IBi9RG/b67k@public.gmane.org>
> Cc: Chris Leech <cleech-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
> "James E . J . Bottomley" <jejb-tEXmvtCZX7AybS5Ee8rs3A@public.gmane.org>,
> "Martin K . Petersen" <martin.petersen-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>,
> ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
> open-iscsi-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org,
> linux-scsi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
> linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
> chenanqing-Oq79sGaMObY@public.gmane.org
> Subject: [PATCH] scsi: libiscsi: we should take compound page into account also
> Date: Fri, 27 Mar 2020 05:20:01 -0400
> Message-Id: <20200327092001.56879-1-chenanqing-Oq79sGaMObY@public.gmane.org>
> X-Mailer: git-send-email 2.18.2
>
> The patch addresses a real crash in which a slab object came from a
> compound page, so we need to take compound pages into account as well.
> Fixes commit 08b11eaccfcf ("scsi: libiscsi: fall back to
> sendmsg for slab pages").
>
> Signed-off-by: Chen Anqing <chenanqing-Oq79sGaMObY@public.gmane.org>
> ---
> drivers/scsi/libiscsi_tcp.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
> index 6ef93c7af954..98304e5e1f6f 100644
> --- a/drivers/scsi/libiscsi_tcp.c
> +++ b/drivers/scsi/libiscsi_tcp.c
> @@ -128,7 +128,8 @@ static void iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
> * coalescing neighboring slab objects into a single frag which
> * triggers one of hardened usercopy checks.
> */
> - if (!recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg)))
> + if (!recv && page_count(sg_page(sg)) >= 1 &&
> + !PageSlab(compound_head(sg_page(sg))))
> return;
>
> if (recv) {
> --
> 2.18.2
>
This is missing a proper subject ...
--
You received this message because you are subscribed to the Google Groups "open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email to open-iscsi+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To view this discussion on the web visit https://groups.google.com/d/msgid/open-iscsi/5462bc04-8409-a0c3-628f-640d1c92b8c6%40suse.com.
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2020-03-27 15:53 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-03 11:26 Paul Lopez-Bravo
-- strict thread matches above, loose matches on Subject: below --
2020-03-27 9:20 (unknown) chenanqing
[not found] ` <5e7dc543.vYG3wru8B/me1sOV%chenanqing-Oq79sGaMObY@public.gmane.org>
2020-03-27 15:53 ` Lee Duncan
2020-03-27 8:36 (unknown) chenanqing
2020-03-27 8:59 ` Ilya Dryomov
2017-11-13 15:04 Re: Amos Kalonzo
2015-11-01 20:03 Mario, Franco
2015-08-11 10:57 RE: zso2bytom
2013-08-23 6:18 info
2012-10-23 4:12 (unknown), jie sun
2012-10-23 11:50 ` Wido den Hollander
2012-10-24 5:48 ` Re: jie sun
2012-10-24 5:58 ` Re: Gregory Farnum
[not found] ` <CAB6Jr7SbbAE=yEVgg+UupTmavKfvFvGj8j7C9M0Ya2FocNmw9w@mail.gmail.com>
2012-10-25 12:15 ` Re: Gregory Farnum
2012-10-25 14:36 ` Re: Alex Elder
2012-10-25 15:38 ` Re: Sage Weil
2012-10-25 21:28 ` Re: Dan Mick
2012-10-25 22:15 ` Re: Alex Elder
2012-10-26 3:08 ` Re: jie sun
[not found] <4FD71854.6060503@hastexo.com>
2012-06-12 10:44 ` "Radosgw installation and administration" docs Florian Haas
2012-06-12 16:47 ` Yehuda Sadeh
2012-06-12 18:11 ` Florian Haas
2012-06-12 18:54 ` Yehuda Sadeh
2012-07-01 20:22 ` Chuanyu
2012-07-02 9:35 ` Chuanyu Tsai
2012-05-08 0:54 (unknown), Tim Flavin
2012-05-17 21:10 ` Josh Durgin
2011-08-15 23:01 Re: jeffrice
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox