From mboxrd@z Thu Jan 1 00:00:00 1970 From: Todd Previte Subject: Re: [PATCH 07/11] drm/helper: add Displayport multi-stream helper (v0.5) Date: Tue, 17 Jun 2014 08:23:23 -0700 Message-ID: <53A05D6B.7020406@gmail.com> References: <1400640904-16847-1-git-send-email-airlied@gmail.com> <1400640904-16847-8-git-send-email-airlied@gmail.com> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="===============1231931286==" Return-path: In-Reply-To: <1400640904-16847-8-git-send-email-airlied@gmail.com> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" To: Dave Airlie Cc: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org List-Id: dri-devel@lists.freedesktop.org This is a multi-part message in MIME format. --===============1231931286== Content-Type: multipart/alternative; boundary="------------070900050502070101020808" This is a multi-part message in MIME format. --------------070900050502070101020808 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: quoted-printable This patch is a monster, but that's to be expected with MST, I suppose.=20 :) It has some formatting issues (lines over 80 characters in length)=20 but that can be cleaned up later (as far as I'm concerned). Otherwise I=20 don't see anything glaring here, so... Reviewed-by: Todd Previte > Dave Airlie > Tuesday, May 20, 2014 7:55 PM > From: Dave Airlie > > This is the initial import of the helper for displayport multistream. > > It consists of a topology manager, init/destroy/set mst state > > It supports DP 1.2 MST sideband msg protocol handler - via hpd irqs > > connector detect and edid retrieval interface. > > It supports i2c device over DP 1.2 sideband msg protocol (EDID reads on= ly) > > bandwidth manager API via vcpi allocation and payload updating, > along with a helper to check the ACT status. > > Objects: > MST topology manager - one per toplevel MST capable GPU port - not=20 > sure if this should be higher level again > MST branch unit - one instance per plugged branching unit - one at top=20 > of hierarchy - others hanging from ports > MST port - one port per port reported by branching units, can have MST=20 > units hanging from them as well. > > Changes since initial posting: > a) add a mutex responsbile for the queues, it locks the sideband and=20 > msg slots, and msgs to transmit state > b) add worker to handle connection state change events, for MST device=20 > chaining and hotplug > c) add a payload spinlock > d) add path sideband msg support > e) fixup enum path resources transmit > f) reduce max dpcd msg to 16, as per DP1.2 spec. > g) separate tx queue kicking from irq processing and move irq acking=20 > back to drivers. > > Changes since v0.2: > a) reorganise code, > b) drop ACT forcing code > c) add connector naming interface using path property > d) add topology dumper helper > e) proper reference counting and lookup for ports and mstbs. > f) move tx kicking into a workq > g) add aux locking - this should be redone > h) split teardown into two parts > i) start working on documentation on interface. > > Changes since v0.3: > a) vc payload locking and tracking fixes > b) add hotplug callback into driver - replaces crazy return 1 scheme > c) txmsg + mst branch device refcount fixes > d) don't bail on mst shutdown if device is gone > e) change irq handler to take all 4 bytes of SINK_COUNT + ESI vectors > f) make DP payload updates timeout longer - observed on docking=20 > station redock > g) add more info to debugfs dumper > > Changes since v0.4: > a) suspend/resume support > b) more debugging in debugfs > > TODO: > misc features > > Signed-off-by: Dave Airlie > --- > Documentation/DocBook/drm.tmpl | 6 + > drivers/gpu/drm/Makefile | 2 +- > drivers/gpu/drm/drm_dp_mst_topology.c | 2739=20 > +++++++++++++++++++++++++++++++++ > include/drm/drm_dp_mst_helper.h | 507 ++++++ > 4 files changed, 3253 insertions(+), 1 deletion(-) > create mode 100644 drivers/gpu/drm/drm_dp_mst_topology.c > create mode 100644 include/drm/drm_dp_mst_helper.h > > diff --git a/Documentation/DocBook/drm.tmpl=20 > b/Documentation/DocBook/drm.tmpl > index 83dd0b0..1883976 100644 > --- a/Documentation/DocBook/drm.tmpl > +++ b/Documentation/DocBook/drm.tmpl > @@ -2296,6 +2296,12 @@ void intel_crt_init(struct drm_device *dev) > !Edrivers/gpu/drm/drm_dp_helper.c > > > + Display Port MST Helper Functions Reference > +!Pdrivers/gpu/drm/drm_dp_mst_topology.c dp mst helper > +!Iinclude/drm/drm_dp_mst_helper.h > +!Edrivers/gpu/drm/drm_dp_mst_topology.c > + > + > EDID Helper Functions Reference > !Edrivers/gpu/drm/drm_edid.c > > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile > index 48e38ba..712b73e 100644 > --- a/drivers/gpu/drm/Makefile > +++ b/drivers/gpu/drm/Makefile > @@ -23,7 +23,7 @@ drm-$(CONFIG_DRM_PANEL) +=3D drm_panel.o > > drm-usb-y :=3D drm_usb.o > > -drm_kms_helper-y :=3D drm_crtc_helper.o drm_dp_helper.o drm_probe_help= er.o > +drm_kms_helper-y :=3D drm_crtc_helper.o drm_dp_helper.o=20 > drm_probe_helper.o drm_dp_mst_topology.o > drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) +=3D drm_edid_load.o > drm_kms_helper-$(CONFIG_DRM_KMS_FB_HELPER) +=3D drm_fb_helper.o > drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) +=3D drm_fb_cma_helper.o > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c=20 > b/drivers/gpu/drm/drm_dp_mst_topology.c > new file mode 100644 > index 0000000..ebd9292 > --- /dev/null > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c > @@ -0,0 +1,2739 @@ > +/* > + * Copyright =C2=A9 2014 Red Hat > + * > + * Permission to use, copy, modify, distribute, and sell this=20 > software and its > + * documentation for any purpose is hereby granted without fee,=20 > provided that > + * the above copyright notice appear in all copies and that both that=20 > copyright > + * notice and this permission notice appear in supporting=20 > documentation, and > + * that the name of the copyright holders not be used in advertising o= r > + * publicity pertaining to distribution of the software without specif= ic, > + * written prior permission. The copyright holders make no=20 > representations > + * about the suitability of this software for any purpose. It is=20 > provided "as > + * is" without express or implied warranty. > + * > + * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS=20 > SOFTWARE, > + * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN= NO > + * EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL,=20 > INDIRECT OR > + * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM=20 > LOSS OF USE, > + * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OT= HER > + * TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR=20 > PERFORMANCE > + * OF THIS SOFTWARE. > + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include > + > +/** > + * DOC: dp mst helper > + * > + * These functions contain parts of the DisplayPort 1.2a MultiStream=20 > Transport > + * protocol. The helpers contain a topology manager and bandwidth=20 > manager. > + * The helpers encapsulate the sending and received of sideband msgs. > + */ > +static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr, > + char *buf); > +static int test_calc_pbn_mode(void); > + > +static void drm_dp_put_port(struct drm_dp_mst_port *port); > + > +static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *m= gr, > + int id, > + struct drm_dp_payload *payload); > + > +static int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port, > + int offset, int size, u8 *bytes); > + > +static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mg= r, > + struct drm_dp_mst_branch *mstb); > +static int drm_dp_send_enum_path_resources(struct=20 > drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_branch *mstb, > + struct drm_dp_mst_port *port); > +static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr, > + u8 *guid); > + > +static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux); > +static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux); > +static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr); > +/* sideband msg handling */ > +static u8 drm_dp_msg_header_crc4(const uint8_t *data, size_t num_nibbl= es) > +{ > + u8 bitmask =3D 0x80; > + u8 bitshift =3D 7; > + u8 array_index =3D 0; > + int number_of_bits =3D num_nibbles * 4; > + u8 remainder =3D 0; > + > + while (number_of_bits !=3D 0) { > + number_of_bits--; > + remainder <<=3D 1; > + remainder |=3D (data[array_index] & bitmask) >> bitshift; > + bitmask >>=3D 1; > + bitshift--; > + if (bitmask =3D=3D 0) { > + bitmask =3D 0x80; > + bitshift =3D 7; > + array_index++; > + } > + if ((remainder & 0x10) =3D=3D 0x10) > + remainder ^=3D 0x13; > + } > + > + number_of_bits =3D 4; > + while (number_of_bits !=3D 0) { > + number_of_bits--; > + remainder <<=3D 1; > + if ((remainder & 0x10) !=3D 0) > + remainder ^=3D 0x13; > + } > + > + return remainder; > +} > + > +static u8 drm_dp_msg_data_crc4(const uint8_t *data, u8 number_of_bytes= ) > +{ > + u8 bitmask =3D 0x80; > + u8 bitshift =3D 7; > + u8 array_index =3D 0; > + int number_of_bits =3D number_of_bytes * 8; > + u16 remainder =3D 0; > + > + while (number_of_bits !=3D 0) { > + number_of_bits--; > + remainder <<=3D 1; > + remainder |=3D (data[array_index] & bitmask) >> bitshift; > + bitmask >>=3D 1; > + bitshift--; > + if (bitmask =3D=3D 0) { > + bitmask =3D 0x80; > + bitshift =3D 7; > + array_index++; > + } > + if ((remainder & 0x100) =3D=3D 0x100) > + remainder ^=3D 0xd5; > + } > + > + number_of_bits =3D 8; > + while (number_of_bits !=3D 0) { > + number_of_bits--; > + remainder <<=3D 1; > + if ((remainder & 0x100) !=3D 0) > > + remainder ^=3D 0xd5; > + } > + > + return remainder & 0xff; > +} > +static inline u8 drm_dp_calc_sb_hdr_size(struct=20 > drm_dp_sideband_msg_hdr *hdr) > +{ > + u8 size =3D 3; > + size +=3D (hdr->lct / 2); > + return size; > +} > + > +static void drm_dp_encode_sideband_msg_hdr(struct=20 > drm_dp_sideband_msg_hdr *hdr, > + u8 *buf, int *len) > +{ > + int idx =3D 0; > + int i; > + u8 crc4; > + buf[idx++] =3D ((hdr->lct & 0xf) << 4) | (hdr->lcr & 0xf); > + for (i =3D 0; i < (hdr->lct / 2); i++) > + buf[idx++] =3D hdr->rad[i]; > + buf[idx++] =3D (hdr->broadcast << 7) | (hdr->path_msg << 6) | > + (hdr->msg_len & 0x3f); > + buf[idx++] =3D (hdr->somt << 7) | (hdr->eomt << 6) | (hdr->seqno << 4= ); > + > + crc4 =3D drm_dp_msg_header_crc4(buf, (idx * 2) - 1); > + buf[idx - 1] |=3D (crc4 & 0xf); > + > + *len =3D idx; > +} > + > +static bool drm_dp_decode_sideband_msg_hdr(struct=20 > drm_dp_sideband_msg_hdr *hdr, > + u8 *buf, int buflen, u8 *hdrlen) > +{ > + u8 crc4; > + u8 len; > + int i; > + u8 idx; > + if (buf[0] =3D=3D 0) > + return false; > + len =3D 3; > + len +=3D ((buf[0] & 0xf0) >> 4) / 2; > + if (len > buflen) > + return false; > + crc4 =3D drm_dp_msg_header_crc4(buf, (len * 2) - 1); > + > + if ((crc4 & 0xf) !=3D (buf[len - 1] & 0xf)) { > + DRM_DEBUG_KMS("crc4 mismatch 0x%x 0x%x\n", crc4, buf[len - 1]); > + return false; > + } > + > + hdr->lct =3D (buf[0] & 0xf0) >> 4; > + hdr->lcr =3D (buf[0] & 0xf); > + idx =3D 1; > + for (i =3D 0; i < (hdr->lct / 2); i++) > + hdr->rad[i] =3D buf[idx++]; > + hdr->broadcast =3D (buf[idx] >> 7) & 0x1; > + hdr->path_msg =3D (buf[idx] >> 6) & 0x1; > + hdr->msg_len =3D buf[idx] & 0x3f; > + idx++; > + hdr->somt =3D (buf[idx] >> 7) & 0x1; > + hdr->eomt =3D (buf[idx] >> 6) & 0x1; > + hdr->seqno =3D (buf[idx] >> 4) & 0x1; > + idx++; > + *hdrlen =3D idx; > + return true; > +} > + > +static void drm_dp_encode_sideband_req(struct=20 > drm_dp_sideband_msg_req_body *req, > + struct drm_dp_sideband_msg_tx *raw) > +{ > + int idx =3D 0; > + int i; > + u8 *buf =3D raw->msg; > + buf[idx++] =3D req->req_type & 0x7f; > + > + switch (req->req_type) { > + case DP_ENUM_PATH_RESOURCES: > + buf[idx] =3D (req->u.port_num.port_number & 0xf) << 4; > + idx++; > + break; > + case DP_ALLOCATE_PAYLOAD: > + buf[idx] =3D (req->u.allocate_payload.port_number & 0xf) << 4 | > + (req->u.allocate_payload.number_sdp_streams & 0xf); > + idx++; > + buf[idx] =3D (req->u.allocate_payload.vcpi & 0x7f); > + idx++; > + buf[idx] =3D (req->u.allocate_payload.pbn >> 8); > + idx++; > + buf[idx] =3D (req->u.allocate_payload.pbn & 0xff); > + idx++; > + for (i =3D 0; i < req->u.allocate_payload.number_sdp_streams / 2; i++= ) { > + buf[idx] =3D ((req->u.allocate_payload.sdp_stream_sink[i * 2] & 0xf)=20 > << 4) | > + (req->u.allocate_payload.sdp_stream_sink[i * 2 + 1] & 0xf); > + idx++; > + } > + if (req->u.allocate_payload.number_sdp_streams & 1) { > + i =3D req->u.allocate_payload.number_sdp_streams - 1; > + buf[idx] =3D (req->u.allocate_payload.sdp_stream_sink[i] & 0xf) << 4; > + idx++; > + } > + break; > + case DP_QUERY_PAYLOAD: > + buf[idx] =3D (req->u.query_payload.port_number & 0xf) << 4; > + idx++; > + buf[idx] =3D (req->u.query_payload.vcpi & 0x7f); > + idx++; > + break; > + case DP_REMOTE_DPCD_READ: > + buf[idx] =3D (req->u.dpcd_read.port_number & 0xf) << 4; > + buf[idx] |=3D ((req->u.dpcd_read.dpcd_address & 0xf0000) >> 16) & 0xf= ; > + idx++; > + buf[idx] =3D (req->u.dpcd_read.dpcd_address & 0xff00) >> 8; > + idx++; > + buf[idx] =3D (req->u.dpcd_read.dpcd_address & 0xff); > + idx++; > + buf[idx] =3D (req->u.dpcd_read.num_bytes); > + idx++; > + break; > + > + case DP_REMOTE_DPCD_WRITE: > + buf[idx] =3D (req->u.dpcd_write.port_number & 0xf) << 4; > + buf[idx] |=3D ((req->u.dpcd_write.dpcd_address & 0xf0000) >> 16) & 0x= f; > + idx++; > + buf[idx] =3D (req->u.dpcd_write.dpcd_address & 0xff00) >> 8; > + idx++; > + buf[idx] =3D (req->u.dpcd_write.dpcd_address & 0xff); > + idx++; > + buf[idx] =3D (req->u.dpcd_write.num_bytes); > + idx++; > + memcpy(&buf[idx], req->u.dpcd_write.bytes, req->u.dpcd_write.num_byte= s); > + idx +=3D req->u.dpcd_write.num_bytes; > + break; > + case DP_REMOTE_I2C_READ: > + buf[idx] =3D (req->u.i2c_read.port_number & 0xf) << 4; > + buf[idx] |=3D (req->u.i2c_read.num_transactions & 0x3); > + idx++; > + for (i =3D 0; i < (req->u.i2c_read.num_transactions & 0x3); i++) { > + buf[idx] =3D req->u.i2c_read.transactions[i].i2c_dev_id & 0x7f; > > + idx++; > + buf[idx] =3D req->u.i2c_read.transactions[i].num_bytes; > + idx++; > + memcpy(&buf[idx], req->u.i2c_read.transactions[i].bytes,=20 > req->u.i2c_read.transactions[i].num_bytes); > + idx +=3D req->u.i2c_read.transactions[i].num_bytes; > + > + buf[idx] =3D (req->u.i2c_read.transactions[i].no_stop_bit & 0x1) << 5= ; > + buf[idx] |=3D (req->u.i2c_read.transactions[i].i2c_transaction_delay = &=20 > 0xf); > + idx++; > + } > + buf[idx] =3D (req->u.i2c_read.read_i2c_device_id) & 0x7f; > + idx++; > + buf[idx] =3D (req->u.i2c_read.num_bytes_read); > + idx++; > + break; > + > + case DP_REMOTE_I2C_WRITE: > + buf[idx] =3D (req->u.i2c_write.port_number & 0xf) << 4; > + idx++; > + buf[idx] =3D (req->u.i2c_write.write_i2c_device_id) & 0x7f; > + idx++; > + buf[idx] =3D (req->u.i2c_write.num_bytes); > + idx++; > + memcpy(&buf[idx], req->u.i2c_write.bytes, req->u.i2c_write.num_bytes)= ; > + idx +=3D req->u.i2c_write.num_bytes; > + break; > + } > + raw->cur_len =3D idx; > +} > + > +static void drm_dp_crc_sideband_chunk_req(u8 *msg, u8 len) > +{ > + u8 crc4; > + crc4 =3D drm_dp_msg_data_crc4(msg, len); > + msg[len] =3D crc4; > +} > + > +static void drm_dp_encode_sideband_reply(struct=20 > drm_dp_sideband_msg_reply_body *rep, > + struct drm_dp_sideband_msg_tx *raw) > +{ > + int idx =3D 0; > + u8 *buf =3D raw->msg; > + > + buf[idx++] =3D (rep->reply_type & 0x1) << 7 | (rep->req_type & 0x7f); > + > + raw->cur_len =3D idx; > +} > + > +/* this adds a chunk of msg to the builder to get the final msg */ > +static bool drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *m= sg, > + u8 *replybuf, u8 replybuflen, bool hdr) > +{ > + int ret; > + u8 crc4; > + > + if (hdr) { > + u8 hdrlen; > + struct drm_dp_sideband_msg_hdr recv_hdr; > + ret =3D drm_dp_decode_sideband_msg_hdr(&recv_hdr, replybuf,=20 > replybuflen, &hdrlen); > + if (ret =3D=3D false) { > + print_hex_dump(KERN_DEBUG, "failed hdr", DUMP_PREFIX_NONE, 16, 1,=20 > replybuf, replybuflen, false); > + return false; > + } > + > + /* get length contained in this portion */ > + msg->curchunk_len =3D recv_hdr.msg_len; > + msg->curchunk_hdrlen =3D hdrlen; > + > + /* we have already gotten an somt - don't bother parsing */ > + if (recv_hdr.somt && msg->have_somt) > + return false; > + > + if (recv_hdr.somt) { > + memcpy(&msg->initial_hdr, &recv_hdr, sizeof(struct=20 > drm_dp_sideband_msg_hdr)); > + msg->have_somt =3D true; > + } > + if (recv_hdr.eomt) > + msg->have_eomt =3D true; > + > + /* copy the bytes for the remainder of this header chunk */ > + msg->curchunk_idx =3D min(msg->curchunk_len, (u8)(replybuflen - hdrle= n)); > + memcpy(&msg->chunk[0], replybuf + hdrlen, msg->curchunk_idx); > + } else { > + memcpy(&msg->chunk[msg->curchunk_idx], replybuf, replybuflen); > + msg->curchunk_idx +=3D replybuflen; > + } > + > + if (msg->curchunk_idx >=3D msg->curchunk_len) { > + /* do CRC */ > + crc4 =3D drm_dp_msg_data_crc4(msg->chunk, msg->curchunk_len - 1); > + /* copy chunk into bigger msg */ > + memcpy(&msg->msg[msg->curlen], msg->chunk, msg->curchunk_len - 1); > + msg->curlen +=3D msg->curchunk_len - 1; > + } > + return true; > +} > + > +static bool drm_dp_sideband_parse_link_address(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + int i; > + memcpy(repmsg->u.link_addr.guid, &raw->msg[idx], 16); > + idx +=3D 16; > + repmsg->u.link_addr.nports =3D raw->msg[idx] & 0xf; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + for (i =3D 0; i < repmsg->u.link_addr.nports; i++) { > + if (raw->msg[idx] & 0x80) > + repmsg->u.link_addr.ports[i].input_port =3D 1; > + > + repmsg->u.link_addr.ports[i].peer_device_type =3D (raw->msg[idx] >> 4= )=20 > & 0x7; > + repmsg->u.link_addr.ports[i].port_number =3D (raw->msg[idx] & 0xf); > + > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.link_addr.ports[i].mcs =3D (raw->msg[idx] >> 7) & 0x1; > + repmsg->u.link_addr.ports[i].ddps =3D (raw->msg[idx] >> 6) & 0x1; > + if (repmsg->u.link_addr.ports[i].input_port =3D=3D 0) > + repmsg->u.link_addr.ports[i].legacy_device_plug_status =3D=20 > (raw->msg[idx] >> 5) & 0x1; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + if (repmsg->u.link_addr.ports[i].input_port =3D=3D 0) { > + repmsg->u.link_addr.ports[i].dpcd_revision =3D (raw->msg[idx]); > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + memcpy(repmsg->u.link_addr.ports[i].peer_guid, &raw->msg[idx], 16); > + idx +=3D 16; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.link_addr.ports[i].num_sdp_streams =3D (raw->msg[idx] >> 4)= =20 > & 0xf; > + repmsg->u.link_addr.ports[i].num_sdp_stream_sinks =3D (raw->msg[idx] = &=20 > 0xf); > + idx++; > + > + } > + if (idx > raw->curlen) > + goto fail_len; > + } > + > + return true; > +fail_len: > + DRM_DEBUG_KMS("link address reply parse length fail %d %d\n", idx,=20 > raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_remote_dpcd_read(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + repmsg->u.remote_dpcd_read_ack.port_number =3D raw->msg[idx] & 0xf; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.remote_dpcd_read_ack.num_bytes =3D raw->msg[idx]; > + if (idx > raw->curlen) > + goto fail_len; > + > + memcpy(repmsg->u.remote_dpcd_read_ack.bytes, &raw->msg[idx],=20 > repmsg->u.remote_dpcd_read_ack.num_bytes); > + return true; > +fail_len: > + DRM_DEBUG_KMS("link address reply parse length fail %d %d\n", idx,=20 > raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_remote_dpcd_write(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + repmsg->u.remote_dpcd_write_ack.port_number =3D raw->msg[idx] & 0xf; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + return true; > +fail_len: > + DRM_DEBUG_KMS("parse length fail %d %d\n", idx, raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_remote_i2c_read_ack(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + > + repmsg->u.remote_i2c_read_ack.port_number =3D (raw->msg[idx] & 0xf); > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.remote_i2c_read_ack.num_bytes =3D raw->msg[idx]; > + idx++; > + /* TODO check */ > + memcpy(repmsg->u.remote_i2c_read_ack.bytes, &raw->msg[idx],=20 > repmsg->u.remote_i2c_read_ack.num_bytes); > + return true; > +fail_len: > + DRM_DEBUG_KMS("remote i2c reply parse length fail %d %d\n", idx,=20 > raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_enum_path_resources_ack(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + repmsg->u.path_resources.port_number =3D (raw->msg[idx] >> 4) & 0xf; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.path_resources.full_payload_bw_number =3D (raw->msg[idx] <<= =20 > 8) | (raw->msg[idx+1]); > + idx +=3D 2; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.path_resources.avail_payload_bw_number =3D (raw->msg[idx] <= <=20 > 8) | (raw->msg[idx+1]); > + idx +=3D 2; > + if (idx > raw->curlen) > + goto fail_len; > + return true; > +fail_len: > + DRM_DEBUG_KMS("enum resource parse length fail %d %d\n", idx,=20 > raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_allocate_payload_ack(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + repmsg->u.allocate_payload.port_number =3D (raw->msg[idx] >> 4) & 0xf= ; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.allocate_payload.vcpi =3D raw->msg[idx]; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.allocate_payload.allocated_pbn =3D (raw->msg[idx] << 8) |=20 > (raw->msg[idx+1]); > + idx +=3D 2; > + if (idx > raw->curlen) > + goto fail_len; > + return true; > +fail_len: > + DRM_DEBUG_KMS("allocate payload parse length fail %d %d\n", idx,=20 > raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_query_payload_ack(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_reply_body *repmsg) > +{ > + int idx =3D 1; > + repmsg->u.query_payload.port_number =3D (raw->msg[idx] >> 4) & 0xf; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + repmsg->u.query_payload.allocated_pbn =3D (raw->msg[idx] << 8) |=20 > (raw->msg[idx + 1]); > + idx +=3D 2; > + if (idx > raw->curlen) > + goto fail_len; > + return true; > +fail_len: > + DRM_DEBUG_KMS("query payload parse length fail %d %d\n", idx,=20 > raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx=20 > *raw, > + struct drm_dp_sideband_msg_reply_body *msg) > +{ > + memset(msg, 0, sizeof(*msg)); > + msg->reply_type =3D (raw->msg[0] & 0x80) >> 7; > + msg->req_type =3D (raw->msg[0] & 0x7f); > + > + if (msg->reply_type) { > + memcpy(msg->u.nak.guid, &raw->msg[1], 16); > + msg->u.nak.reason =3D raw->msg[17]; > + msg->u.nak.nak_data =3D raw->msg[18]; > + return false; > + } > + > + switch (msg->req_type) { > + case DP_LINK_ADDRESS: > + return drm_dp_sideband_parse_link_address(raw, msg); > + case DP_QUERY_PAYLOAD: > + return drm_dp_sideband_parse_query_payload_ack(raw, msg); > + case DP_REMOTE_DPCD_READ: > + return drm_dp_sideband_parse_remote_dpcd_read(raw, msg); > + case DP_REMOTE_DPCD_WRITE: > + return drm_dp_sideband_parse_remote_dpcd_write(raw, msg); > + case DP_REMOTE_I2C_READ: > + return drm_dp_sideband_parse_remote_i2c_read_ack(raw, msg); > + case DP_ENUM_PATH_RESOURCES: > + return drm_dp_sideband_parse_enum_path_resources_ack(raw, msg); > + case DP_ALLOCATE_PAYLOAD: > + return drm_dp_sideband_parse_allocate_payload_ack(raw, msg); > + default: > + DRM_ERROR("Got unknown reply 0x%02x\n", msg->req_type); > + return false; > + } > +} > + > +static bool drm_dp_sideband_parse_connection_status_notify(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_req_body *msg) > +{ > + int idx =3D 1; > + > + msg->u.conn_stat.port_number =3D (raw->msg[idx] & 0xf0) >> 4; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + > + memcpy(msg->u.conn_stat.guid, &raw->msg[idx], 16); > + idx +=3D 16; > + if (idx > raw->curlen) > + goto fail_len; > + > + msg->u.conn_stat.legacy_device_plug_status =3D (raw->msg[idx] >> 6) &= 0x1; > + msg->u.conn_stat.displayport_device_plug_status =3D (raw->msg[idx] >>= =20 > 5) & 0x1; > + msg->u.conn_stat.message_capability_status =3D (raw->msg[idx] >> 4) &= 0x1; > + msg->u.conn_stat.input_port =3D (raw->msg[idx] >> 3) & 0x1; > + msg->u.conn_stat.peer_device_type =3D (raw->msg[idx] & 0x7); > + idx++; > + return true; > +fail_len: > + DRM_DEBUG_KMS("connection status reply parse length fail %d %d\n",=20 > idx, raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_resource_status_notify(struct=20 > drm_dp_sideband_msg_rx *raw, > + struct drm_dp_sideband_msg_req_body *msg) > +{ > + int idx =3D 1; > + > + msg->u.resource_stat.port_number =3D (raw->msg[idx] & 0xf0) >> 4; > + idx++; > + if (idx > raw->curlen) > + goto fail_len; > + > + memcpy(msg->u.resource_stat.guid, &raw->msg[idx], 16); > + idx +=3D 16; > + if (idx > raw->curlen) > + goto fail_len; > + > + msg->u.resource_stat.available_pbn =3D (raw->msg[idx] << 8) |=20 > (raw->msg[idx + 1]); > + idx++; > + return true; > +fail_len: > + DRM_DEBUG_KMS("resource status reply parse length fail %d %d\n",=20 > idx, raw->curlen); > + return false; > +} > + > +static bool drm_dp_sideband_parse_req(struct drm_dp_sideband_msg_rx *r= aw, > + struct drm_dp_sideband_msg_req_body *msg) > +{ > + memset(msg, 0, sizeof(*msg)); > + msg->req_type =3D (raw->msg[0] & 0x7f); > + > + switch (msg->req_type) { > + case DP_CONNECTION_STATUS_NOTIFY: > + return drm_dp_sideband_parse_connection_status_notify(raw, msg); > + case DP_RESOURCE_STATUS_NOTIFY: > + return drm_dp_sideband_parse_resource_status_notify(raw, msg); > + default: > + DRM_ERROR("Got unknown request 0x%02x\n", msg->req_type); > + return false; > + } > +} > + > +static int build_dpcd_write(struct drm_dp_sideband_msg_tx *msg, u8=20 > port_num, u32 offset, u8 num_bytes, u8 *bytes) > +{ > + struct drm_dp_sideband_msg_req_body req; > + > + req.req_type =3D DP_REMOTE_DPCD_WRITE; > + req.u.dpcd_write.port_number =3D port_num; > + req.u.dpcd_write.dpcd_address =3D offset; > + req.u.dpcd_write.num_bytes =3D num_bytes; > + memcpy(req.u.dpcd_write.bytes, bytes, num_bytes); > + drm_dp_encode_sideband_req(&req, msg); > + > + return 0; > +} > + > +static int build_link_address(struct drm_dp_sideband_msg_tx *msg) > +{ > + struct drm_dp_sideband_msg_req_body req; > + > + req.req_type =3D DP_LINK_ADDRESS; > + drm_dp_encode_sideband_req(&req, msg); > + return 0; > +} > + > +static int build_enum_path_resources(struct drm_dp_sideband_msg_tx=20 > *msg, int port_num) > +{ > + struct drm_dp_sideband_msg_req_body req; > + > + req.req_type =3D DP_ENUM_PATH_RESOURCES; > + req.u.port_num.port_number =3D port_num; > + drm_dp_encode_sideband_req(&req, msg); > > + msg->path_msg =3D true; > + return 0; > +} > + > +static int build_allocate_payload(struct drm_dp_sideband_msg_tx *msg,=20 > int port_num, > + u8 vcpi, uint16_t pbn) > +{ > + struct drm_dp_sideband_msg_req_body req; > + memset(&req, 0, sizeof(req)); > + req.req_type =3D DP_ALLOCATE_PAYLOAD; > + req.u.allocate_payload.port_number =3D port_num; > + req.u.allocate_payload.vcpi =3D vcpi; > + req.u.allocate_payload.pbn =3D pbn; > + drm_dp_encode_sideband_req(&req, msg); > + msg->path_msg =3D true; > + return 0; > +} > + > +static int drm_dp_mst_assign_payload_id(struct=20 > drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_vcpi *vcpi) > +{ > + int ret; > + > + mutex_lock(&mgr->payload_lock); > + ret =3D find_first_zero_bit(&mgr->payload_mask, mgr->max_payloads + 1= ); > + if (ret > mgr->max_payloads) { > + ret =3D -EINVAL; > + DRM_DEBUG_KMS("out of payload ids %d\n", ret); > + goto out_unlock; > + } > + > + set_bit(ret, &mgr->payload_mask); > + vcpi->vcpi =3D ret; > + mgr->proposed_vcpis[ret - 1] =3D vcpi; > +out_unlock: > + mutex_unlock(&mgr->payload_lock); > + return ret; > +} > + > +static void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr=20 > *mgr, > + int id) > +{ > + if (id =3D=3D 0) > + return; > + > + mutex_lock(&mgr->payload_lock); > + DRM_DEBUG_KMS("putting payload %d\n", id); > + clear_bit(id, &mgr->payload_mask); > + mgr->proposed_vcpis[id - 1] =3D NULL; > + mutex_unlock(&mgr->payload_lock); > +} > + > +static bool check_txmsg_state(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_sideband_msg_tx *txmsg) > +{ > + bool ret; > + mutex_lock(&mgr->qlock); > + ret =3D (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_RX || > + txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_TIMEOUT); > + mutex_unlock(&mgr->qlock); > + return ret; > +} > + > +static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb, > + struct drm_dp_sideband_msg_tx *txmsg) > +{ > + struct drm_dp_mst_topology_mgr *mgr =3D mstb->mgr; > + int ret; > + > + ret =3D wait_event_timeout(mgr->tx_waitq, > + check_txmsg_state(mgr, txmsg), > + (4 * HZ)); > + mutex_lock(&mstb->mgr->qlock); > + if (ret > 0) { > + if (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_TIMEOUT) { > + ret =3D -EIO; > + goto out; > + } > + } else { > + DRM_DEBUG_KMS("timedout msg send %p %d %d\n", txmsg, txmsg->state,=20 > txmsg->seqno); > + > + /* dump some state */ > + ret =3D -EIO; > + > + /* remove from q */ > + if (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_QUEUED || > + txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_START_SEND) { > + list_del(&txmsg->next); > + } > + > + if (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_START_SEND || > + txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_SENT) { > + mstb->tx_slots[txmsg->seqno] =3D NULL; > + } > + } > +out: > + mutex_unlock(&mgr->qlock); > + > + return ret; > +} > + > +static struct drm_dp_mst_branch *drm_dp_add_mst_branch_device(u8 lct,=20 > u8 *rad) > +{ > + struct drm_dp_mst_branch *mstb; > + > + mstb =3D kzalloc(sizeof(*mstb), GFP_KERNEL); > + if (!mstb) > + return NULL; > + > + mstb->lct =3D lct; > + if (lct > 1) > + memcpy(mstb->rad, rad, lct / 2); > + INIT_LIST_HEAD(&mstb->ports); > + kref_init(&mstb->kref); > + return mstb; > +} > + > +static void drm_dp_destroy_mst_branch_device(struct kref *kref) > +{ > + struct drm_dp_mst_branch *mstb =3D container_of(kref, struct=20 > drm_dp_mst_branch, kref); > + struct drm_dp_mst_port *port, *tmp; > + bool wake_tx =3D false; > + > + cancel_work_sync(&mstb->mgr->work); > + > + /* > + * destroy all ports - don't need lock > + * as there are no more references to the mst branch > + * device at this point. > + */ > + list_for_each_entry_safe(port, tmp, &mstb->ports, next) { > + list_del(&port->next); > + drm_dp_put_port(port); > + } > + > + /* drop any tx slots msg */ > + mutex_lock(&mstb->mgr->qlock); > + if (mstb->tx_slots[0]) { > + mstb->tx_slots[0]->state =3D DRM_DP_SIDEBAND_TX_TIMEOUT; > + mstb->tx_slots[0] =3D NULL; > + wake_tx =3D true; > + } > + if (mstb->tx_slots[1]) { > + mstb->tx_slots[1]->state =3D DRM_DP_SIDEBAND_TX_TIMEOUT; > + mstb->tx_slots[1] =3D NULL; > + wake_tx =3D true; > + } > + mutex_unlock(&mstb->mgr->qlock); > + > + if (wake_tx) > + wake_up(&mstb->mgr->tx_waitq); > + kfree(mstb); > +} > + > +static void drm_dp_put_mst_branch_device(struct drm_dp_mst_branch *mst= b) > +{ > + kref_put(&mstb->kref, drm_dp_destroy_mst_branch_device); > +} > + > + > +static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port,=20 > int old_pdt) > +{ > + switch (old_pdt) { > + case DP_PEER_DEVICE_DP_LEGACY_CONV: > + case DP_PEER_DEVICE_SST_SINK: > + /* remove i2c over sideband */ > + drm_dp_mst_unregister_i2c_bus(&port->aux); > + break; > + case DP_PEER_DEVICE_MST_BRANCHING: > + drm_dp_put_mst_branch_device(port->mstb); > + port->mstb =3D NULL; > + break; > + } > +} > + > +static void drm_dp_destroy_port(struct kref *kref) > +{ > + struct drm_dp_mst_port *port =3D container_of(kref, struct=20 > drm_dp_mst_port, kref); > + struct drm_dp_mst_topology_mgr *mgr =3D port->mgr; > + if (!port->input) { > + port->vcpi.num_slots =3D 0; > + if (port->connector) > + (*port->mgr->cbs->destroy_connector)(mgr, port->connector); > + drm_dp_port_teardown_pdt(port, port->pdt); > + > + if (!port->input && port->vcpi.vcpi > 0) > + drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); > + } > + kfree(port); > +} > + > +static void drm_dp_put_port(struct drm_dp_mst_port *port) > +{ > + kref_put(&port->kref, drm_dp_destroy_port); > +} > + > +static struct drm_dp_mst_branch=20 > *drm_dp_mst_get_validated_mstb_ref_locked(struct drm_dp_mst_branch=20 > *mstb, struct drm_dp_mst_branch *to_find) > +{ > + struct drm_dp_mst_port *port; > + struct drm_dp_mst_branch *rmstb; > + if (to_find =3D=3D mstb) { > + kref_get(&mstb->kref); > + return mstb; > + } > + list_for_each_entry(port, &mstb->ports, next) { > + if (port->mstb) { > + rmstb =3D drm_dp_mst_get_validated_mstb_ref_locked(port->mstb, to_fin= d); > + if (rmstb) > + return rmstb; > + } > + } > + return NULL; > +} > + > +static struct drm_dp_mst_branch *drm_dp_get_validated_mstb_ref(struct=20 > drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_branch *mstb) > +{ > + struct drm_dp_mst_branch *rmstb =3D NULL; > + mutex_lock(&mgr->lock); > + if (mgr->mst_primary) > + rmstb =3D drm_dp_mst_get_validated_mstb_ref_locked(mgr->mst_primary,=20 > mstb); > + mutex_unlock(&mgr->lock); > + return rmstb; > +} > + > +static struct drm_dp_mst_port *drm_dp_mst_get_port_ref_locked(struct=20 > drm_dp_mst_branch *mstb, struct drm_dp_mst_port *to_find) > +{ > + struct drm_dp_mst_port *port, *mport; > + > + list_for_each_entry(port, &mstb->ports, next) { > + if (port =3D=3D to_find) { > + kref_get(&port->kref); > + return port; > + } > + if (port->mstb) { > + mport =3D drm_dp_mst_get_port_ref_locked(port->mstb, to_find); > + if (mport) > + return mport; > + } > + } > + return NULL; > +} > + > +static struct drm_dp_mst_port *drm_dp_get_validated_port_ref(struct=20 > drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) > +{ > + struct drm_dp_mst_port *rport =3D NULL; > + mutex_lock(&mgr->lock); > + if (mgr->mst_primary) > + rport =3D drm_dp_mst_get_port_ref_locked(mgr->mst_primary, port); > + mutex_unlock(&mgr->lock); > + return rport; > +} > + > +static struct drm_dp_mst_port *drm_dp_get_port(struct=20 > drm_dp_mst_branch *mstb, u8 port_num) > +{ > + struct drm_dp_mst_port *port; > + > + list_for_each_entry(port, &mstb->ports, next) { > + if (port->port_num =3D=3D port_num) { > + kref_get(&port->kref); > + return port; > + } > + } > + > + return NULL; > +} > + > +/* > + * calculate a new RAD for this MST branch device > + * if parent has an LCT of 2 then it has 1 nibble of RAD, > + * if parent has an LCT of 3 then it has 2 nibbles of RAD, > + */ > +static u8 drm_dp_calculate_rad(struct drm_dp_mst_port *port, > + u8 *rad) > +{ > + int lct =3D port->parent->lct; > + int shift =3D 4; > + int idx =3D lct / 2; > + if (lct > 1) { > + memcpy(rad, port->parent->rad, idx); > + shift =3D (lct % 2) ? 4 : 0; > + } else > + rad[0] =3D 0; > + > + rad[idx] |=3D port->port_num << shift; > + return lct + 1; > +} > + > +/* > + * return sends link address for new mstb > + */ > +static bool drm_dp_port_setup_pdt(struct drm_dp_mst_port *port) > +{ > + int ret; > + u8 rad[6], lct; > + bool send_link =3D false; > + switch (port->pdt) { > + case DP_PEER_DEVICE_DP_LEGACY_CONV: > + case DP_PEER_DEVICE_SST_SINK: > + /* add i2c over sideband */ > + ret =3D drm_dp_mst_register_i2c_bus(&port->aux); > + break; > + case DP_PEER_DEVICE_MST_BRANCHING: > + lct =3D drm_dp_calculate_rad(port, rad); > + > + port->mstb =3D drm_dp_add_mst_branch_device(lct, rad); > + port->mstb->mgr =3D port->mgr; > + port->mstb->port_parent =3D port; > + > + send_link =3D true; > + break; > + } > + return send_link; > +} > + > +static void drm_dp_check_port_guid(struct drm_dp_mst_branch *mstb, > + struct drm_dp_mst_port *port) > +{ > + int ret; > + if (port->dpcd_rev >=3D 0x12) { > + port->guid_valid =3D drm_dp_validate_guid(mstb->mgr, port->guid); > + if (!port->guid_valid) { > + ret =3D drm_dp_send_dpcd_write(mstb->mgr, > + port, > + DP_GUID, > + 16, port->guid); > + port->guid_valid =3D true; > + } > + } > +} > + > +static void build_mst_prop_path(struct drm_dp_mst_port *port, > + struct drm_dp_mst_branch *mstb, > + char *proppath) > +{ > + int i; > + char temp[8]; > + snprintf(proppath, 255, "mst:%d", mstb->mgr->conn_base_id); > + for (i =3D 0; i < (mstb->lct - 1); i++) { > + int shift =3D (i % 2) ? 0 : 4; > + int port_num =3D mstb->rad[i / 2] >> shift; > + snprintf(temp, 8, "-%d", port_num); > + strncat(proppath, temp, 255); > + } > + snprintf(temp, 8, "-%d", port->port_num); > + strncat(proppath, temp, 255); > +} > + > +static void drm_dp_add_port(struct drm_dp_mst_branch *mstb, > + struct device *dev, > + struct drm_dp_link_addr_reply_port *port_msg) > +{ > + struct drm_dp_mst_port *port; > + bool ret; > + bool created =3D false; > + int old_pdt =3D 0; > + int old_ddps =3D 0; > + port =3D drm_dp_get_port(mstb, port_msg->port_number); > + if (!port) { > + port =3D kzalloc(sizeof(*port), GFP_KERNEL); > + if (!port) > + return; > + kref_init(&port->kref); > + port->parent =3D mstb; > + port->port_num =3D port_msg->port_number; > + port->mgr =3D mstb->mgr; > + port->aux.name =3D "DPMST"; > + port->aux.dev =3D dev; > + created =3D true; > + } else { > + old_pdt =3D port->pdt; > + old_ddps =3D port->ddps; > + } > + > + port->pdt =3D port_msg->peer_device_type; > + port->input =3D port_msg->input_port; > + port->mcs =3D port_msg->mcs; > + port->ddps =3D port_msg->ddps; > + port->ldps =3D port_msg->legacy_device_plug_status; > + port->dpcd_rev =3D port_msg->dpcd_revision; > + > + memcpy(port->guid, port_msg->peer_guid, 16); > + > + /* manage mstb port lists with mgr lock - take a reference > + for this list */ > + if (created) { > + mutex_lock(&mstb->mgr->lock); > + kref_get(&port->kref); > + list_add(&port->next, &mstb->ports); > + mutex_unlock(&mstb->mgr->lock); > + } > + > + if (old_ddps !=3D port->ddps) { > + if (port->ddps) { > + drm_dp_check_port_guid(mstb, port); > + if (!port->input) > + drm_dp_send_enum_path_resources(mstb->mgr, mstb, port); > + } else { > + port->guid_valid =3D false; > + port->available_pbn =3D 0; > + } > + } > + > + if (old_pdt !=3D port->pdt && !port->input) { > + drm_dp_port_teardown_pdt(port, old_pdt); > + > + ret =3D drm_dp_port_setup_pdt(port); > + if (ret =3D=3D true) { > + drm_dp_send_link_address(mstb->mgr, port->mstb); > + port->mstb->link_address_sent =3D true; > + } > + } > + > + if (created && !port->input) { > + char proppath[255]; > + build_mst_prop_path(port, mstb, proppath); > + port->connector =3D (*mstb->mgr->cbs->add_connector)(mstb->mgr, port,= =20 > proppath); > + } > + > + /* put reference to this port */ > + drm_dp_put_port(port); > +} > + > +static void drm_dp_update_port(struct drm_dp_mst_branch *mstb, > + struct drm_dp_connection_status_notify *conn_stat) > +{ > + struct drm_dp_mst_port *port; > + int old_pdt; > + int old_ddps; > + bool dowork =3D false; > + port =3D drm_dp_get_port(mstb, conn_stat->port_number); > + if (!port) > + return; > + > + old_ddps =3D port->ddps; > + old_pdt =3D port->pdt; > + port->pdt =3D conn_stat->peer_device_type; > + port->mcs =3D conn_stat->message_capability_status; > + port->ldps =3D conn_stat->legacy_device_plug_status; > + port->ddps =3D conn_stat->displayport_device_plug_status; > + > + if (old_ddps !=3D port->ddps) { > + if (port->ddps) { > + drm_dp_check_port_guid(mstb, port); > + dowork =3D true; > + } else { > + port->guid_valid =3D false; > + port->available_pbn =3D 0; > + } > + } > + if (old_pdt !=3D port->pdt && !port->input) { > + drm_dp_port_teardown_pdt(port, old_pdt); > + > + if (drm_dp_port_setup_pdt(port)) > + dowork =3D true; > + } > + > + drm_dp_put_port(port); > + if (dowork) > + queue_work(system_long_wq, &mstb->mgr->work); > + > +} > + > +static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct=20 > drm_dp_mst_topology_mgr *mgr, > + u8 lct, u8 *rad) > +{ > + struct drm_dp_mst_branch *mstb; > + struct drm_dp_mst_port *port; > + int i; > + /* find the port by iterating down */ > + mstb =3D mgr->mst_primary; > + > + for (i =3D 0; i < lct - 1; i++) { > + int shift =3D (i % 2) ? 0 : 4; > + int port_num =3D rad[i / 2] >> shift; > + > + list_for_each_entry(port, &mstb->ports, next) { > + if (port->port_num =3D=3D port_num) { > + if (!port->mstb) { > + DRM_ERROR("failed to lookup MSTB with lct %d, rad %02x\n", lct, rad[0= ]); > + return NULL; > + } > + > + mstb =3D port->mstb; > + break; > + } > + } > + } > + kref_get(&mstb->kref); > + return mstb; > +} > + > +static void drm_dp_check_and_send_link_address(struct=20 > drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_branch *mstb) > +{ > + struct drm_dp_mst_port *port; > + > + if (!mstb->link_address_sent) { > + drm_dp_send_link_address(mgr, mstb); > + mstb->link_address_sent =3D true; > + } > + list_for_each_entry(port, &mstb->ports, next) { > + if (port->input) > + continue; > + > + if (!port->ddps) > + continue; > + > + if (!port->available_pbn) > + drm_dp_send_enum_path_resources(mgr, mstb, port); > + > + if (port->mstb) > + drm_dp_check_and_send_link_address(mgr, port->mstb); > + } > +} > + > +static void drm_dp_mst_link_probe_work(struct work_struct *work) > +{ > + struct drm_dp_mst_topology_mgr *mgr =3D container_of(work, struct=20 > drm_dp_mst_topology_mgr, work); > + > + drm_dp_check_and_send_link_address(mgr, mgr->mst_primary); > + > +} > + > +static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr, > + u8 *guid) > +{ > + static u8 zero_guid[16]; > + > + if (!memcmp(guid, zero_guid, 16)) { > + u64 salt =3D get_jiffies_64(); > + memcpy(&guid[0], &salt, sizeof(u64)); > + memcpy(&guid[8], &salt, sizeof(u64)); > + return false; > + } > + return true; > +} > + > +#if 0 > +static int build_dpcd_read(struct drm_dp_sideband_msg_tx *msg, u8=20 > port_num, u32 offset, u8 num_bytes) > +{ > + struct drm_dp_sideband_msg_req_body req; > + > + req.req_type =3D DP_REMOTE_DPCD_READ; > + req.u.dpcd_read.port_number =3D port_num; > + req.u.dpcd_read.dpcd_address =3D offset; > + req.u.dpcd_read.num_bytes =3D num_bytes; > + drm_dp_encode_sideband_req(&req, msg); > + > + return 0; > +} > +#endif > + > +static int drm_dp_send_sideband_msg(struct drm_dp_mst_topology_mgr *mg= r, > + bool up, u8 *msg, int len) > +{ > + int ret; > + int regbase =3D up ? DP_SIDEBAND_MSG_UP_REP_BASE :=20 > DP_SIDEBAND_MSG_DOWN_REQ_BASE; > + int tosend, total, offset; > + int retries =3D 0; > + > +retry: > + total =3D len; > + offset =3D 0; > + do { > + tosend =3D min3(mgr->max_dpcd_transaction_bytes, 16, total); > + > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_write(mgr->aux, regbase + offset, > + &msg[offset], > + tosend); > + mutex_unlock(&mgr->aux_lock); > + if (ret !=3D tosend) { > + if (ret =3D=3D -EIO && retries < 5) { > + retries++; > + goto retry; > + } > + DRM_DEBUG_KMS("failed to dpcd write %d %d\n", tosend, ret); > + WARN(1, "fail\n"); > + > + return -EIO; > + } > + offset +=3D tosend; > + total -=3D tosend; > + } while (total > 0); > + return 0; > +} > + > +static int set_hdr_from_dst_qlock(struct drm_dp_sideband_msg_hdr *hdr, > + struct drm_dp_sideband_msg_tx *txmsg) > +{ > + struct drm_dp_mst_branch *mstb =3D txmsg->dst; > + > + /* both msg slots are full */ > + if (txmsg->seqno =3D=3D -1) { > + if (mstb->tx_slots[0] && mstb->tx_slots[1]) { > + DRM_DEBUG_KMS("%s: failed to find slot\n", __func__); > + return -EAGAIN; > + } > + if (mstb->tx_slots[0] =3D=3D NULL && mstb->tx_slots[1] =3D=3D NULL) { > + txmsg->seqno =3D mstb->last_seqno; > + mstb->last_seqno ^=3D 1; > + } else if (mstb->tx_slots[0] =3D=3D NULL) > + txmsg->seqno =3D 0; > + else > + txmsg->seqno =3D 1; > + mstb->tx_slots[txmsg->seqno] =3D txmsg; > + } > + hdr->broadcast =3D 0; > + hdr->path_msg =3D txmsg->path_msg; > + hdr->lct =3D mstb->lct; > + hdr->lcr =3D mstb->lct - 1; > + if (mstb->lct > 1) > + memcpy(hdr->rad, mstb->rad, mstb->lct / 2); > + hdr->seqno =3D txmsg->seqno; > + return 0; > +} > +/* > + * process a single block of the next message in the sideband queue > + */ > +static int process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr= , > + struct drm_dp_sideband_msg_tx *txmsg, > + bool up) > +{ > + u8 chunk[48]; > + struct drm_dp_sideband_msg_hdr hdr; > + int len, space, idx, tosend; > + int ret; > + > + if (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_QUEUED) { > + txmsg->seqno =3D -1; > + txmsg->state =3D DRM_DP_SIDEBAND_TX_START_SEND; > + } > + > + /* make hdr from dst mst - for replies use seqno > + otherwise assign one */ > + ret =3D set_hdr_from_dst_qlock(&hdr, txmsg); > + if (ret < 0) > + return ret; > + > + /* amount left to send in this message */ > + len =3D txmsg->cur_len - txmsg->cur_offset; > + > + /* 48 - sideband msg size - 1 byte for data CRC, x header bytes */ > + space =3D 48 - 1 - drm_dp_calc_sb_hdr_size(&hdr); > + > + tosend =3D min(len, space); > + if (len =3D=3D txmsg->cur_len) > + hdr.somt =3D 1; > + if (space >=3D len) > + hdr.eomt =3D 1; > + > + > + hdr.msg_len =3D tosend + 1; > + drm_dp_encode_sideband_msg_hdr(&hdr, chunk, &idx); > + memcpy(&chunk[idx], &txmsg->msg[txmsg->cur_offset], tosend); > + /* add crc at end */ > + drm_dp_crc_sideband_chunk_req(&chunk[idx], tosend); > + idx +=3D tosend + 1; > + > + ret =3D drm_dp_send_sideband_msg(mgr, up, chunk, idx); > + if (ret) { > + DRM_DEBUG_KMS("sideband msg failed to send\n"); > + return ret; > + } > + > + txmsg->cur_offset +=3D tosend; > + if (txmsg->cur_offset =3D=3D txmsg->cur_len) { > + txmsg->state =3D DRM_DP_SIDEBAND_TX_SENT; > + return 1; > + } > + return 0; > +} > + > +/* must be called holding qlock */ > +static void process_single_down_tx_qlock(struct=20 > drm_dp_mst_topology_mgr *mgr) > +{ > + struct drm_dp_sideband_msg_tx *txmsg; > + int ret; > + > + /* construct a chunk from the first msg in the tx_msg queue */ > + if (list_empty(&mgr->tx_msg_downq)) { > + mgr->tx_down_in_progress =3D false; > + return; > + } > + mgr->tx_down_in_progress =3D true; > + > + txmsg =3D list_first_entry(&mgr->tx_msg_downq, struct=20 > drm_dp_sideband_msg_tx, next); > + ret =3D process_single_tx_qlock(mgr, txmsg, false); > + if (ret =3D=3D 1) { > + /* txmsg is sent it should be in the slots now */ > + list_del(&txmsg->next); > + } else if (ret) { > + DRM_DEBUG_KMS("failed to send msg in q %d\n", ret); > + list_del(&txmsg->next); > + if (txmsg->seqno !=3D -1) > + txmsg->dst->tx_slots[txmsg->seqno] =3D NULL; > + txmsg->state =3D DRM_DP_SIDEBAND_TX_TIMEOUT; > + wake_up(&mgr->tx_waitq); > + } > + if (list_empty(&mgr->tx_msg_downq)) { > + mgr->tx_down_in_progress =3D false; > + return; > + } > +} > + > +/* called holding qlock */ > +static void process_single_up_tx_qlock(struct drm_dp_mst_topology_mgr=20 > *mgr) > +{ > + struct drm_dp_sideband_msg_tx *txmsg; > + int ret; > + > + /* construct a chunk from the first msg in the tx_msg queue */ > + if (list_empty(&mgr->tx_msg_upq)) { > + mgr->tx_up_in_progress =3D false; > + return; > + } > + > + txmsg =3D list_first_entry(&mgr->tx_msg_upq, struct=20 > drm_dp_sideband_msg_tx, next); > + ret =3D process_single_tx_qlock(mgr, txmsg, true); > + if (ret =3D=3D 1) { > + /* up txmsgs aren't put in slots - so free after we send it */ > + list_del(&txmsg->next); > + kfree(txmsg); > + } else if (ret) > + DRM_DEBUG_KMS("failed to send msg in q %d\n", ret); > + mgr->tx_up_in_progress =3D true; > +} > + > +static void drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_sideband_msg_tx *txmsg) > +{ > + mutex_lock(&mgr->qlock); > + list_add_tail(&txmsg->next, &mgr->tx_msg_downq); > + if (!mgr->tx_down_in_progress) > + process_single_down_tx_qlock(mgr); > + mutex_unlock(&mgr->qlock); > +} > + > +static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mg= r, > + struct drm_dp_mst_branch *mstb) > +{ > + int len; > + struct drm_dp_sideband_msg_tx *txmsg; > + int ret; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) > + return -ENOMEM; > + > + txmsg->dst =3D mstb; > + len =3D build_link_address(txmsg); > + > + drm_dp_queue_down_tx(mgr, txmsg); > + > + ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg); > + if (ret > 0) { > + int i; > + > + if (txmsg->reply.reply_type =3D=3D 1) > + DRM_DEBUG_KMS("link address nak received\n"); > + else { > + DRM_DEBUG_KMS("link address reply: %d\n",=20 > txmsg->reply.u.link_addr.nports); > + for (i =3D 0; i < txmsg->reply.u.link_addr.nports; i++) { > + DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x,=20 > mcs: %d, ddps: %d, ldps %d\n", i, > + txmsg->reply.u.link_addr.ports[i].input_port, > + txmsg->reply.u.link_addr.ports[i].peer_device_type, > + txmsg->reply.u.link_addr.ports[i].port_number, > + txmsg->reply.u.link_addr.ports[i].dpcd_revision, > + txmsg->reply.u.link_addr.ports[i].mcs, > + txmsg->reply.u.link_addr.ports[i].ddps, > + txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status); > + } > + for (i =3D 0; i < txmsg->reply.u.link_addr.nports; i++) { > + drm_dp_add_port(mstb, mgr->dev, &txmsg->reply.u.link_addr.ports[i]); > + } > + (*mgr->cbs->hotplug)(mgr); > + } > + } else > + DRM_DEBUG_KMS("link address failed %d\n", ret); > + > + kfree(txmsg); > + return 0; > +} > + > +static int drm_dp_send_enum_path_resources(struct=20 > drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_branch *mstb, > + struct drm_dp_mst_port *port) > +{ > + int len; > + struct drm_dp_sideband_msg_tx *txmsg; > + int ret; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) > + return -ENOMEM; > + > + txmsg->dst =3D mstb; > + len =3D build_enum_path_resources(txmsg, port->port_num); > + > + drm_dp_queue_down_tx(mgr, txmsg); > + > + ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg); > + if (ret > 0) { > + if (txmsg->reply.reply_type =3D=3D 1) > + DRM_DEBUG_KMS("enum path resources nak received\n"); > + else { > + if (port->port_num !=3D txmsg->reply.u.path_resources.port_number) > + DRM_ERROR("got incorrect port in response\n"); > + DRM_DEBUG_KMS("enum path resources %d: %d %d\n",=20 > txmsg->reply.u.path_resources.port_number,=20 > txmsg->reply.u.path_resources.full_payload_bw_number, > + txmsg->reply.u.path_resources.avail_payload_bw_number); > + port->available_pbn =3D=20 > txmsg->reply.u.path_resources.avail_payload_bw_number; > + } > + } > + > + kfree(txmsg); > + return 0; > +} > + > +int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port, > + int id, > + int pbn) > +{ > + struct drm_dp_sideband_msg_tx *txmsg; > + struct drm_dp_mst_branch *mstb; > + int len, ret; > + > + mstb =3D drm_dp_get_validated_mstb_ref(mgr, port->parent); > + if (!mstb) > + return -EINVAL; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) { > + ret =3D -ENOMEM; > + goto fail_put; > + } > + > + txmsg->dst =3D mstb; > + len =3D build_allocate_payload(txmsg, port->port_num, > + id, > + pbn); > + > + drm_dp_queue_down_tx(mgr, txmsg); > + > + ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg); > + if (ret > 0) { > + if (txmsg->reply.reply_type =3D=3D 1) { > + ret =3D -EINVAL; > + } else > + ret =3D 0; > + } > + kfree(txmsg); > +fail_put: > + drm_dp_put_mst_branch_device(mstb); > + return ret; > +} > + > +static int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr=20 > *mgr, > + int id, > + struct drm_dp_payload *payload) > +{ > + int ret; > + > + ret =3D drm_dp_dpcd_write_payload(mgr, id, payload); > + if (ret < 0) { > + payload->payload_state =3D 0; > + return ret; > + } > + payload->payload_state =3D DP_PAYLOAD_LOCAL; > + return 0; > +} > + > +int drm_dp_create_payload_step2(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port, > + int id, > + struct drm_dp_payload *payload) > +{ > + int ret; > + ret =3D drm_dp_payload_send_msg(mgr, port, id, port->vcpi.pbn); > + if (ret < 0) > + return ret; > + payload->payload_state =3D DP_PAYLOAD_REMOTE; > + return ret; > +} > + > +int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port, > + int id, > + struct drm_dp_payload *payload) > +{ > + DRM_DEBUG_KMS("\n"); > + /* its okay for these to fail */ > + if (port) { > + drm_dp_payload_send_msg(mgr, port, id, 0); > + } > + > + drm_dp_dpcd_write_payload(mgr, id, payload); > + payload->payload_state =3D 0; > + return 0; > +} > + > +int drm_dp_destroy_payload_step2(struct drm_dp_mst_topology_mgr *mgr, > + int id, > + struct drm_dp_payload *payload) > +{ > + payload->payload_state =3D 0; > + return 0; > +} > + > +/** > + * drm_dp_update_payload_part1() - Execute payload update part 1 > + * @mgr: manager to use. > + * > + * This iterates over all proposed virtual channels, and tries to > + * allocate space in the link for them. For 0->slots transitions, > + * this step just writes the VCPI to the MST device. For slots->0 > + * transitions, this writes the updated VCPIs and removes the > + * remote VC payloads. > + * > + * after calling this the driver should generate ACT and payload > + * packets. > + */ > +int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr) > +{ > + int i; > + int cur_slots =3D 1; > + struct drm_dp_payload req_payload; > + struct drm_dp_mst_port *port; > + > + mutex_lock(&mgr->payload_lock); > + for (i =3D 0; i < mgr->max_payloads; i++) { > + /* solve the current payloads - compare to the hw ones > + - update the hw view */ > + req_payload.start_slot =3D cur_slots; > + if (mgr->proposed_vcpis[i]) { > + port =3D container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port,= =20 > vcpi); > + req_payload.num_slots =3D mgr->proposed_vcpis[i]->num_slots; > + } else { > + port =3D NULL; > + req_payload.num_slots =3D 0; > + } > + /* work out what is required to happen with this payload */ > + if (mgr->payloads[i].start_slot !=3D req_payload.start_slot || > + mgr->payloads[i].num_slots !=3D req_payload.num_slots) { > + > + /* need to push an update for this payload */ > + if (req_payload.num_slots) { > + drm_dp_create_payload_step1(mgr, i + 1, &req_payload); > + mgr->payloads[i].num_slots =3D req_payload.num_slots; > + } else if (mgr->payloads[i].num_slots) { > + mgr->payloads[i].num_slots =3D 0; > + drm_dp_destroy_payload_step1(mgr, port, i + 1, &mgr->payloads[i]); > + req_payload.payload_state =3D mgr->payloads[i].payload_state; > + } > + mgr->payloads[i].start_slot =3D req_payload.start_slot; > + mgr->payloads[i].payload_state =3D req_payload.payload_state; > + } > + cur_slots +=3D req_payload.num_slots; > + } > + mutex_unlock(&mgr->payload_lock); > + > + return 0; > +} > +EXPORT_SYMBOL(drm_dp_update_payload_part1); > + > +/** > + * drm_dp_update_payload_part2() - Execute payload update part 2 > + * @mgr: manager to use. > + * > + * This iterates over all proposed virtual channels, and tries to > + * allocate space in the link for them. For 0->slots transitions, > + * this step writes the remote VC payload commands. For slots->0 > + * this just resets some internal state. > + */ > +int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr) > +{ > + struct drm_dp_mst_port *port; > + int i; > + int ret; > + mutex_lock(&mgr->payload_lock); > + for (i =3D 0; i < mgr->max_payloads; i++) { > + > + if (!mgr->proposed_vcpis[i]) > + continue; > + > + port =3D container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port,= =20 > vcpi); > + > + DRM_DEBUG_KMS("payload %d %d\n", i, mgr->payloads[i].payload_state); > + if (mgr->payloads[i].payload_state =3D=3D DP_PAYLOAD_LOCAL) { > + ret =3D drm_dp_create_payload_step2(mgr, port, i + 1, &mgr->payloads[= i]); > + } else if (mgr->payloads[i].payload_state =3D=3D DP_PAYLOAD_DELETE_LO= CAL) { > + ret =3D drm_dp_destroy_payload_step2(mgr, i + 1, &mgr->payloads[i]); > + } > + if (ret) { > + mutex_unlock(&mgr->payload_lock); > + return ret; > + } > + } > + mutex_unlock(&mgr->payload_lock); > + return 0; > +} > +EXPORT_SYMBOL(drm_dp_update_payload_part2); > + > +#if 0 /* unused as of yet */ > +static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port, > + int offset, int size) > +{ > + int len; > + struct drm_dp_sideband_msg_tx *txmsg; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) > + return -ENOMEM; > + > + len =3D build_dpcd_read(txmsg, port->port_num, 0, 8); > + txmsg->dst =3D port->parent; > + > + drm_dp_queue_down_tx(mgr, txmsg); > + > + return 0; > +} > +#endif > + > +static int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port, > + int offset, int size, u8 *bytes) > +{ > + int len; > + int ret; > + struct drm_dp_sideband_msg_tx *txmsg; > + struct drm_dp_mst_branch *mstb; > + > + mstb =3D drm_dp_get_validated_mstb_ref(mgr, port->parent); > + if (!mstb) > + return -EINVAL; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) { > + ret =3D -ENOMEM; > + goto fail_put; > + } > + > + len =3D build_dpcd_write(txmsg, port->port_num, offset, size, bytes); > + txmsg->dst =3D mstb; > + > + drm_dp_queue_down_tx(mgr, txmsg); > + > + ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg); > + if (ret > 0) { > + if (txmsg->reply.reply_type =3D=3D 1) { > + ret =3D -EINVAL; > + } else > + ret =3D 0; > + } > + kfree(txmsg); > +fail_put: > + drm_dp_put_mst_branch_device(mstb); > + return ret; > +} > + > +static int drm_dp_encode_up_ack_reply(struct drm_dp_sideband_msg_tx=20 > *msg, u8 req_type) > +{ > + struct drm_dp_sideband_msg_reply_body reply; > + > + reply.reply_type =3D 1; > + reply.req_type =3D req_type; > + drm_dp_encode_sideband_reply(&reply, msg); > + return 0; > +} > + > +static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mg= r, > + struct drm_dp_mst_branch *mstb, > + int req_type, int seqno, bool broadcast) > +{ > + struct drm_dp_sideband_msg_tx *txmsg; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) > + return -ENOMEM; > + > + txmsg->dst =3D mstb; > + txmsg->seqno =3D seqno; > + drm_dp_encode_up_ack_reply(txmsg, req_type); > + > + mutex_lock(&mgr->qlock); > + list_add_tail(&txmsg->next, &mgr->tx_msg_upq); > + if (!mgr->tx_up_in_progress) { > + process_single_up_tx_qlock(mgr); > + } > + mutex_unlock(&mgr->qlock); > + return 0; > +} > + > +static int drm_dp_get_vc_payload_bw(int dp_link_bw, int dp_link_count) > +{ > + switch (dp_link_bw) { > + case DP_LINK_BW_1_62: > + return 3 * dp_link_count; > + case DP_LINK_BW_2_7: > + return 5 * dp_link_count; > + case DP_LINK_BW_5_4: > + return 10 * dp_link_count; > + } > + return 0; > +} > + > +/** > + * drm_dp_mst_topology_mgr_set_mst() - Set the MST state for a=20 > topology manager > + * @mgr: manager to set state for > + * @mst_state: true to enable MST on this connector - false to disable= . > + * > + * This is called by the driver when it detects an MST capable device=20 > plugged > + * into a DP MST capable port, or when a DP MST capable device is=20 > unplugged. > + */ > +int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr=20 > *mgr, bool mst_state) > +{ > + int ret =3D 0; > + struct drm_dp_mst_branch *mstb =3D NULL; > + > + mutex_lock(&mgr->lock); > + if (mst_state =3D=3D mgr->mst_state) > + goto out_unlock; > + > + mgr->mst_state =3D mst_state; > + /* set the device into MST mode */ > + if (mst_state) { > + WARN_ON(mgr->mst_primary); > + > + /* get dpcd info */ > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd,=20 > DP_RECEIVER_CAP_SIZE); > + mutex_unlock(&mgr->aux_lock); > + if (ret !=3D DP_RECEIVER_CAP_SIZE) { > + DRM_DEBUG_KMS("failed to read DPCD\n"); > + goto out_unlock; > + } > + > + mgr->pbn_div =3D drm_dp_get_vc_payload_bw(mgr->dpcd[1], mgr->dpcd[2] = &=20 > DP_MAX_LANE_COUNT_MASK); > + mgr->total_pbn =3D 2560; > + mgr->total_slots =3D DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div); > + mgr->avail_slots =3D mgr->total_slots; > + > + /* add initial branch device at LCT 1 */ > + mstb =3D drm_dp_add_mst_branch_device(1, NULL); > + if (mstb =3D=3D NULL) { > + ret =3D -ENOMEM; > + goto out_unlock; > + } > + mstb->mgr =3D mgr; > + > + /* give this the main reference */ > + mgr->mst_primary =3D mstb; > + kref_get(&mgr->mst_primary->kref); > + > + { > + struct drm_dp_payload reset_pay; > + reset_pay.start_slot =3D 0; > + reset_pay.num_slots =3D 0x3f; > + drm_dp_dpcd_write_payload(mgr, 0, &reset_pay); > + } > + > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, > + DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC); > + mutex_unlock(&mgr->aux_lock); > + if (ret < 0) { > + goto out_unlock; > + } > + > + > + /* sort out guid */ > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_read(mgr->aux, DP_GUID, mgr->guid, 16); > + mutex_unlock(&mgr->aux_lock); > + if (ret !=3D 16) { > + DRM_DEBUG_KMS("failed to read DP GUID %d\n", ret); > + goto out_unlock; > + } > + > + mgr->guid_valid =3D drm_dp_validate_guid(mgr, mgr->guid); > + if (!mgr->guid_valid) { > + ret =3D drm_dp_dpcd_write(mgr->aux, DP_GUID, mgr->guid, 16); > + mgr->guid_valid =3D true; > + } > + > + queue_work(system_long_wq, &mgr->work); > + > + ret =3D 0; > + } else { > + /* disable MST on the device */ > + mstb =3D mgr->mst_primary; > + mgr->mst_primary =3D NULL; > + /* this can fail if the device is gone */ > + drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0); > + ret =3D 0; > + memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct=20 > drm_dp_payload)); > + mgr->payload_mask =3D 0; > + set_bit(0, &mgr->payload_mask); > + } > + > +out_unlock: > + mutex_unlock(&mgr->lock); > + if (mstb) > + drm_dp_put_mst_branch_device(mstb); > + return ret; > + > +} > +EXPORT_SYMBOL(drm_dp_mst_topology_mgr_set_mst); > + > +/** > + * drm_dp_mst_topology_mgr_suspend() - suspend the MST manager > + * @mgr: manager to suspend > + * > + * This function tells the MST device that we can't handle UP messages > + * anymore. This should stop it from sending any since we are suspende= d. > + */ > +void drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *m= gr) > +{ > + mutex_lock(&mgr->lock); > + mutex_lock(&mgr->aux_lock); > + drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, > + DP_MST_EN | DP_UPSTREAM_IS_SRC); > + mutex_unlock(&mgr->aux_lock); > + mutex_unlock(&mgr->lock); > +} > +EXPORT_SYMBOL(drm_dp_mst_topology_mgr_suspend); > + > +/** > + * drm_dp_mst_topology_mgr_resume() - resume the MST manager > + * @mgr: manager to resume > + * > + * This will fetch DPCD and see if the device is still there, > + * if it is, it will rewrite the MSTM control bits, and return. > + * > + * if the device fails this returns -1, and the driver should do > + * a full MST reprobe, in case we were undocked. > + */ > +int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr= ) > +{ > + int ret =3D 0; > + > + mutex_lock(&mgr->lock); > + > + if (mgr->mst_primary) { > + int sret; > + mutex_lock(&mgr->aux_lock); > + sret =3D drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd,=20 > DP_RECEIVER_CAP_SIZE); > + mutex_unlock(&mgr->aux_lock); > + if (sret !=3D DP_RECEIVER_CAP_SIZE) { > + DRM_DEBUG_KMS("dpcd read failed - undocked during suspend?\n"); > + ret =3D -1; > + goto out_unlock; > + } > + > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, > + DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC); > + mutex_unlock(&mgr->aux_lock); > + if (ret < 0) { > + DRM_DEBUG_KMS("mst write failed - undocked during suspend?\n"); > + ret =3D -1; > + goto out_unlock; > + } > + ret =3D 0; > + } else > + ret =3D -1; > + > +out_unlock: > + mutex_unlock(&mgr->lock); > + return ret; > +} > +EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume); > + > +static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr=20 > *mgr, bool up) > +{ > + int len; > + u8 replyblock[32]; > + int replylen, origlen, curreply; > + int ret; > + struct drm_dp_sideband_msg_rx *msg; > + int basereg =3D up ? DP_SIDEBAND_MSG_UP_REQ_BASE :=20 > DP_SIDEBAND_MSG_DOWN_REP_BASE; > + msg =3D up ? &mgr->up_req_recv : &mgr->down_rep_recv; > + > + len =3D min(mgr->max_dpcd_transaction_bytes, 16); > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_read(mgr->aux, basereg, > + replyblock, len); > + mutex_unlock(&mgr->aux_lock); > + if (ret !=3D len) { > + DRM_DEBUG_KMS("failed to read DPCD down rep %d %d\n", len, ret); > + return; > + } > + ret =3D drm_dp_sideband_msg_build(msg, replyblock, len, true); > + if (!ret) { > + DRM_DEBUG_KMS("sideband msg build failed %d\n", replyblock[0]); > + return; > + } > + replylen =3D msg->curchunk_len + msg->curchunk_hdrlen; > + > + origlen =3D replylen; > + replylen -=3D len; > + curreply =3D len; > + while (replylen > 0) { > + len =3D min3(replylen, mgr->max_dpcd_transaction_bytes, 16); > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_read(mgr->aux, basereg + curreply, > + replyblock, len); > + mutex_unlock(&mgr->aux_lock); > + if (ret !=3D len) { > + DRM_DEBUG_KMS("failed to read a chunk\n"); > + } > + ret =3D drm_dp_sideband_msg_build(msg, replyblock, len, false); > + if (ret =3D=3D false) > + DRM_DEBUG_KMS("failed to build sideband msg\n"); > + curreply +=3D len; > + replylen -=3D len; > + } > +} > + > +static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr=20 > *mgr) > +{ > + int ret =3D 0; > + > + drm_dp_get_one_sb_msg(mgr, false); > + > + if (mgr->down_rep_recv.have_eomt) { > + struct drm_dp_sideband_msg_tx *txmsg; > + struct drm_dp_mst_branch *mstb; > + int slot =3D -1; > + mstb =3D drm_dp_get_mst_branch_device(mgr, > + mgr->down_rep_recv.initial_hdr.lct, > + mgr->down_rep_recv.initial_hdr.rad); > + > + if (!mstb) { > + DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",=20 > mgr->down_rep_recv.initial_hdr.lct); > + memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx))= ; > + return 0; > + } > + > + /* find the message */ > + slot =3D mgr->down_rep_recv.initial_hdr.seqno; > + mutex_lock(&mgr->qlock); > + txmsg =3D mstb->tx_slots[slot]; > + /* remove from slots */ > + mutex_unlock(&mgr->qlock); > + > + if (!txmsg) { > + DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n", > + mstb, > + mgr->down_rep_recv.initial_hdr.seqno, > + mgr->down_rep_recv.initial_hdr.lct, > + mgr->down_rep_recv.initial_hdr.rad[0], > + mgr->down_rep_recv.msg[0]); > + drm_dp_put_mst_branch_device(mstb); > + memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx))= ; > + return 0; > + } > + > + drm_dp_sideband_parse_reply(&mgr->down_rep_recv, &txmsg->reply); > + if (txmsg->reply.reply_type =3D=3D 1) { > + DRM_DEBUG_KMS("Got NAK reply: req 0x%02x, reason 0x%02x, nak data=20 > 0x%02x\n", txmsg->reply.req_type, txmsg->reply.u.nak.reason,=20 > txmsg->reply.u.nak.nak_data); > + } > + > + memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx))= ; > + drm_dp_put_mst_branch_device(mstb); > + > + mutex_lock(&mgr->qlock); > + txmsg->state =3D DRM_DP_SIDEBAND_TX_RX; > + mstb->tx_slots[slot] =3D NULL; > + mutex_unlock(&mgr->qlock); > + > + wake_up(&mgr->tx_waitq); > + } > + return ret; > +} > + > +static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mg= r) > +{ > + int ret =3D 0; > + drm_dp_get_one_sb_msg(mgr, true); > + > + if (mgr->up_req_recv.have_eomt) { > + struct drm_dp_sideband_msg_req_body msg; > + struct drm_dp_mst_branch *mstb; > + bool seqno; > + mstb =3D drm_dp_get_mst_branch_device(mgr, > + mgr->up_req_recv.initial_hdr.lct, > + mgr->up_req_recv.initial_hdr.rad); > + if (!mstb) { > + DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",=20 > mgr->up_req_recv.initial_hdr.lct); > + memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); > + return 0; > + } > + > + seqno =3D mgr->up_req_recv.initial_hdr.seqno; > + drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg); > + > + if (msg.req_type =3D=3D DP_CONNECTION_STATUS_NOTIFY) { > + drm_dp_send_up_ack_reply(mgr, mstb, msg.req_type, seqno, false); > + drm_dp_update_port(mstb, &msg.u.conn_stat); > + DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt:=20 > %d\n", msg.u.conn_stat.port_number,=20 > msg.u.conn_stat.legacy_device_plug_status,=20 > msg.u.conn_stat.displayport_device_plug_status,=20 > msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port,=20 > msg.u.conn_stat.peer_device_type); > + (*mgr->cbs->hotplug)(mgr); > + > + } else if (msg.req_type =3D=3D DP_RESOURCE_STATUS_NOTIFY) { > + drm_dp_send_up_ack_reply(mgr, mstb, msg.req_type, seqno, false); > + DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n",=20 > msg.u.resource_stat.port_number, msg.u.resource_stat.available_pbn); > + } > + > + drm_dp_put_mst_branch_device(mstb); > + memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); > + } > + return ret; > +} > + > +/** > + * drm_dp_mst_hpd_irq() - MST hotplug IRQ notify > + * @mgr: manager to notify irq for. > + * @esi: 4 bytes from SINK_COUNT_ESI > + * > + * This should be called from the driver when it detects a short IRQ, > + * along with the value of the DEVICE_SERVICE_IRQ_VECTOR_ESI0. The > + * topology manager will process the sideband messages received as a=20 > result > + * of this. > + */ > +int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi,=20 > bool *handled) > +{ > + int ret =3D 0; > + int sc; > + *handled =3D false; > + sc =3D esi[0] & 0x3f; > + if (sc !=3D mgr->sink_count) { > + > + if (mgr->mst_primary && mgr->sink_count =3D=3D 0 && sc) { > + mgr->mst_primary->link_address_sent =3D false; > + queue_work(system_long_wq, &mgr->work); > + } > + mgr->sink_count =3D sc; > + *handled =3D true; > + > + } > + > + if (esi[1] & DP_DOWN_REP_MSG_RDY) { > + ret =3D drm_dp_mst_handle_down_rep(mgr); > + *handled =3D true; > + } > + > + if (esi[1] & DP_UP_REQ_MSG_RDY) { > + ret |=3D drm_dp_mst_handle_up_req(mgr); > + *handled =3D true; > + } > + > + drm_dp_mst_kick_tx(mgr); > + return ret; > +} > +EXPORT_SYMBOL(drm_dp_mst_hpd_irq); > + > +/** > + * drm_dp_mst_detect_port() - get connection status for an MST port > + * @mgr: manager for this port > + * @port: unverified pointer to a port > + * > + * This returns the current connection state for a port. It validates = the > + * port pointer still exists so the caller doesn't require a reference > + */ > +enum drm_connector_status drm_dp_mst_detect_port(struct=20 > drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) > +{ > + enum drm_connector_status status =3D connector_status_disconnected; > + > + /* we need to search for the port in the mgr in case its gone */ > + port =3D drm_dp_get_validated_port_ref(mgr, port); > + if (!port) > + return connector_status_disconnected; > + > + if (!port->ddps) > + goto out; > + > + switch (port->pdt) { > + case DP_PEER_DEVICE_NONE: > + case DP_PEER_DEVICE_MST_BRANCHING: > + break; > + > + case DP_PEER_DEVICE_SST_SINK: > + status =3D connector_status_connected; > + break; > + case DP_PEER_DEVICE_DP_LEGACY_CONV: > + if (port->ldps) > + status =3D connector_status_connected; > + break; > + } > +out: > + drm_dp_put_port(port); > + return status; > +} > +EXPORT_SYMBOL(drm_dp_mst_detect_port); > + > +/** > + * drm_dp_mst_get_edid() - get EDID for an MST port > + * @connector: toplevel connector to get EDID for > + * @mgr: manager for this port > + * @port: unverified pointer to a port. > + * > + * This returns an EDID for the port connected to a connector, > + * It validates the pointer still exists so the caller doesn't require= a > + * reference. > + */ > +struct edid *drm_dp_mst_get_edid(struct drm_connector *connector,=20 > struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) > +{ > + struct edid *edid =3D NULL; > + > + /* we need to search for the port in the mgr in case its gone */ > + port =3D drm_dp_get_validated_port_ref(mgr, port); > + if (!port) > + return NULL; > + > + edid =3D drm_get_edid(connector, &port->aux.ddc); > + drm_dp_put_port(port); > + return edid; > +} > +EXPORT_SYMBOL(drm_dp_mst_get_edid); > + > +/** > + * drm_dp_find_vcpi_slots() - find slots for this PBN value > + * @mgr: manager to use > + * @pbn: payload bandwidth to convert into slots. > + */ > +int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, > + int pbn) > +{ > + int num_slots; > + > + num_slots =3D DIV_ROUND_UP(pbn, mgr->pbn_div); > + > + if (num_slots > mgr->avail_slots) > + return -ENOSPC; > + return num_slots; > +} > +EXPORT_SYMBOL(drm_dp_find_vcpi_slots); > + > +static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_vcpi *vcpi, int pbn) > +{ > + int num_slots; > + int ret; > + > + num_slots =3D DIV_ROUND_UP(pbn, mgr->pbn_div); > + > + if (num_slots > mgr->avail_slots) > + return -ENOSPC; > + > + vcpi->pbn =3D pbn; > + vcpi->aligned_pbn =3D num_slots * mgr->pbn_div; > + vcpi->num_slots =3D num_slots; > + > + ret =3D drm_dp_mst_assign_payload_id(mgr, vcpi); > + if (ret < 0) > + return ret; > + return 0; > +} > + > +/** > + * drm_dp_mst_allocate_vcpi() - Allocate a virtual channel > + * @mgr: manager for this port > + * @port: port to allocate a virtual channel for. > + * @pbn: payload bandwidth number to request > + * @slots: returned number of slots for this PBN. > + */ > +bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,=20 > struct drm_dp_mst_port *port, int pbn, int *slots) > +{ > + int ret; > + > + port =3D drm_dp_get_validated_port_ref(mgr, port); > + if (!port) > + return false; > + > + if (port->vcpi.vcpi > 0) { > + DRM_DEBUG_KMS("payload: vcpi %d already allocated for pbn %d -=20 > requested pbn %d\n", port->vcpi.vcpi, port->vcpi.pbn, pbn); > + if (pbn =3D=3D port->vcpi.pbn) { > + *slots =3D port->vcpi.num_slots; > + return true; > + } > + } > + > + ret =3D drm_dp_init_vcpi(mgr, &port->vcpi, pbn); > + if (ret) { > + DRM_DEBUG_KMS("failed to init vcpi %d %d %d\n", DIV_ROUND_UP(pbn,=20 > mgr->pbn_div), mgr->avail_slots, ret); > + goto out; > + } > + DRM_DEBUG_KMS("initing vcpi for %d %d\n", pbn, port->vcpi.num_slots); > + *slots =3D port->vcpi.num_slots; > + > + drm_dp_put_port(port); > + return true; > +out: > + return false; > +} > +EXPORT_SYMBOL(drm_dp_mst_allocate_vcpi); > + > +/** > + * drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI > + * @mgr: manager for this port > + * @port: unverified pointer to a port. > + * > + * This just resets the number of slots for the ports VCPI for later=20 > programming. > + */ > +void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,=20 > struct drm_dp_mst_port *port) > +{ > + port =3D drm_dp_get_validated_port_ref(mgr, port); > + if (!port) > + return; > + port->vcpi.num_slots =3D 0; > + drm_dp_put_port(port); > +} > +EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots); > + > +/** > + * drm_dp_mst_deallocate_vcpi() - deallocate a VCPI > + * @mgr: manager for this port > + * @port: unverified port to deallocate vcpi for > + */ > +void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,=20 > struct drm_dp_mst_port *port) > +{ > + port =3D drm_dp_get_validated_port_ref(mgr, port); > + if (!port) > + return; > + > + drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); > + port->vcpi.num_slots =3D 0; > + port->vcpi.pbn =3D 0; > + port->vcpi.aligned_pbn =3D 0; > + port->vcpi.vcpi =3D 0; > + drm_dp_put_port(port); > +} > +EXPORT_SYMBOL(drm_dp_mst_deallocate_vcpi); > + > +static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *m= gr, > + int id, struct drm_dp_payload *payload) > +{ > + u8 payload_alloc[3], status; > + int ret; > + int retries =3D 0; > + > + mutex_lock(&mgr->aux_lock); > + drm_dp_dpcd_writeb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, > + DP_PAYLOAD_TABLE_UPDATED); > + mutex_unlock(&mgr->aux_lock); > + > + payload_alloc[0] =3D id; > + payload_alloc[1] =3D payload->start_slot; > + payload_alloc[2] =3D payload->num_slots; > + > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_write(mgr->aux, DP_PAYLOAD_ALLOCATE_SET,=20 > payload_alloc, 3); > + mutex_unlock(&mgr->aux_lock); > + if (ret !=3D 3) { > + DRM_DEBUG_KMS("failed to write payload allocation %d\n", ret); > + goto fail; > + } > + > +retry: > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS,=20 > &status); > + mutex_unlock(&mgr->aux_lock); > + if (ret < 0) { > + DRM_DEBUG_KMS("failed to read payload table status %d\n", ret); > + goto fail; > + } > + > + if (!(status & DP_PAYLOAD_TABLE_UPDATED)) { > + retries++; > + if (retries < 20) { > + usleep_range(10000, 20000); > + goto retry; > + } > + DRM_DEBUG_KMS("status not set after read payload table status %d\n",=20 > status); > + ret =3D -EINVAL; > + goto fail; > + } > + ret =3D 0; > +fail: > + return ret; > +} > + > + > +/** > + * drm_dp_check_act_status() - Check ACT handled status. > + * @mgr: manager to use > + * > + * Check the payload status bits in the DPCD for ACT handled completio= n. > + */ > +int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr) > +{ > + u8 status; > + int ret; > + int count =3D 0; > + > + do { > + mutex_lock(&mgr->aux_lock); > + ret =3D drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS,=20 > &status); > + mutex_unlock(&mgr->aux_lock); > + > + if (ret < 0) { > + DRM_DEBUG_KMS("failed to read payload table status %d\n", ret); > + goto fail; > + } > + > + if (status & DP_PAYLOAD_ACT_HANDLED) > + break; > + count++; > + udelay(100); > + > + } while (count < 30); > + > + if (!(status & DP_PAYLOAD_ACT_HANDLED)) { > + DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n", status,=20 > count); > + ret =3D -EINVAL; > + goto fail; > + } > + return 0; > +fail: > + return ret; > +} > +EXPORT_SYMBOL(drm_dp_check_act_status); > + > +/** > + * drm_dp_calc_pbn_mode() - Calculate the PBN for a mode. > + * @clock: dot clock for the mode > + * @bpp: bpp for the mode. > + * > + * This uses the formula in the spec to calculate the PBN value for a=20 > mode. > + */ > +int drm_dp_calc_pbn_mode(int clock, int bpp) > +{ > + fixed20_12 pix_bw; > + fixed20_12 fbpp; > + fixed20_12 result; > + fixed20_12 margin, tmp; > + u32 res; > + > + pix_bw.full =3D dfixed_const(clock); > + fbpp.full =3D dfixed_const(bpp); > + tmp.full =3D dfixed_const(8); > + fbpp.full =3D dfixed_div(fbpp, tmp); > + > + result.full =3D dfixed_mul(pix_bw, fbpp); > + margin.full =3D dfixed_const(54); > + tmp.full =3D dfixed_const(64); > + margin.full =3D dfixed_div(margin, tmp); > + result.full =3D dfixed_div(result, margin); > + > + margin.full =3D dfixed_const(1006); > + tmp.full =3D dfixed_const(1000); > + margin.full =3D dfixed_div(margin, tmp); > + result.full =3D dfixed_mul(result, margin); > + > + result.full =3D dfixed_div(result, tmp); > + result.full =3D dfixed_ceil(result); > + res =3D dfixed_trunc(result); > + return res; > +} > +EXPORT_SYMBOL(drm_dp_calc_pbn_mode); > + > +static int test_calc_pbn_mode(void) > +{ > + int ret; > + ret =3D drm_dp_calc_pbn_mode(154000, 30); > + if (ret !=3D 689) > + return -EINVAL; > + ret =3D drm_dp_calc_pbn_mode(234000, 30); > + if (ret !=3D 1047) > + return -EINVAL; > + return 0; > +} > + > +/* we want to kick the TX after we've ack the up/down IRQs. */ > +static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr) > +{ > + queue_work(system_long_wq, &mgr->tx_work); > +} > + > +static void drm_dp_mst_dump_mstb(struct seq_file *m, > + struct drm_dp_mst_branch *mstb) > +{ > + struct drm_dp_mst_port *port; > + int tabs =3D mstb->lct; > + char prefix[10]; > + int i; > + > + for (i =3D 0; i < tabs; i++) > + prefix[i] =3D '\t'; > + prefix[i] =3D '\0'; > + > + seq_printf(m, "%smst: %p, %d\n", prefix, mstb, mstb->num_ports); > + list_for_each_entry(port, &mstb->ports, next) { > + seq_printf(m, "%sport: %d: ddps: %d ldps: %d, %p, conn: %p\n",=20 > prefix, port->port_num, port->ddps, port->ldps, port, port->connector); > + if (port->mstb) > + drm_dp_mst_dump_mstb(m, port->mstb); > + } > +} > + > +static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr, > + char *buf) > +{ > + int ret; > + int i; > + mutex_lock(&mgr->aux_lock); > + for (i =3D 0; i < 4; i++) { > + ret =3D drm_dp_dpcd_read(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS + (= i=20 > * 16), &buf[i * 16], 16); > + if (ret !=3D 16) > + break; > + } > + mutex_unlock(&mgr->aux_lock); > + if (i =3D=3D 4) > + return true; > + return false; > +} > + > +/** > + * drm_dp_mst_dump_topology(): dump topology to seq file. > + * @m: seq_file to dump output to > + * @mgr: manager to dump current topology for. > + * > + * helper to dump MST topology to a seq file for debugfs. > + */ > +void drm_dp_mst_dump_topology(struct seq_file *m, > + struct drm_dp_mst_topology_mgr *mgr) > +{ > + int i; > + struct drm_dp_mst_port *port; > + mutex_lock(&mgr->lock); > + if (mgr->mst_primary) > + drm_dp_mst_dump_mstb(m, mgr->mst_primary); > + > + /* dump VCPIs */ > + mutex_unlock(&mgr->lock); > + > + mutex_lock(&mgr->payload_lock); > + seq_printf(m, "vcpi: %lx\n", mgr->payload_mask); > + > + for (i =3D 0; i < mgr->max_payloads; i++) { > + if (mgr->proposed_vcpis[i]) { > + port =3D container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port,= =20 > vcpi); > + seq_printf(m, "vcpi %d: %d %d %d\n", i, port->port_num,=20 > port->vcpi.vcpi, port->vcpi.num_slots); > + } else > + seq_printf(m, "vcpi %d:unsed\n", i); > + } > + for (i =3D 0; i < mgr->max_payloads; i++) { > + seq_printf(m, "payload %d: %d, %d, %d\n", > + i, > + mgr->payloads[i].payload_state, > + mgr->payloads[i].start_slot, > + mgr->payloads[i].num_slots); > + > + > + } > + mutex_unlock(&mgr->payload_lock); > + > + mutex_lock(&mgr->lock); > + if (mgr->mst_primary) { > + u8 buf[64]; > + bool bret; > + int ret; > + ret =3D drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf,=20 > DP_RECEIVER_CAP_SIZE); > + seq_printf(m, "dpcd: "); > + for (i =3D 0; i < DP_RECEIVER_CAP_SIZE; i++) > + seq_printf(m, "%02x ", buf[i]); > + seq_printf(m, "\n"); > + ret =3D drm_dp_dpcd_read(mgr->aux, DP_FAUX_CAP, buf, 2); > + seq_printf(m, "faux/mst: "); > + for (i =3D 0; i < 2; i++) > + seq_printf(m, "%02x ", buf[i]); > + seq_printf(m, "\n"); > + ret =3D drm_dp_dpcd_read(mgr->aux, DP_MSTM_CTRL, buf, 1); > + seq_printf(m, "mst ctrl: "); > + for (i =3D 0; i < 1; i++) > + seq_printf(m, "%02x ", buf[i]); > + seq_printf(m, "\n"); > + > + bret =3D dump_dp_payload_table(mgr, buf); > + if (bret =3D=3D true) { > + seq_printf(m, "payload table: "); > + for (i =3D 0; i < 63; i++) > + seq_printf(m, "%02x ", buf[i]); > + seq_printf(m, "\n"); > + } > + > + } > + > + mutex_unlock(&mgr->lock); > + > +} > +EXPORT_SYMBOL(drm_dp_mst_dump_topology); > + > +static void drm_dp_tx_work(struct work_struct *work) > +{ > + struct drm_dp_mst_topology_mgr *mgr =3D container_of(work, struct=20 > drm_dp_mst_topology_mgr, tx_work); > + > + mutex_lock(&mgr->qlock); > + if (mgr->tx_down_in_progress) > + process_single_down_tx_qlock(mgr); > + mutex_unlock(&mgr->qlock); > +} > + > +/** > + * drm_dp_mst_topology_mgr_init - initialise a topology manager > + * @mgr: manager struct to initialise > + * @dev: device providing this structure - for i2c addition. > + * @aux: DP helper aux channel to talk to this device > + * @max_dpcd_transaction_bytes: hw specific DPCD transaction limit > + * @max_payloads: maximum number of payloads this GPU can source > + * @conn_base_id: the connector object ID the MST device is connected = to. > + * > + * Return 0 for success, or negative error code on failure > + */ > +int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, > + struct device *dev, struct drm_dp_aux *aux, > + int max_dpcd_transaction_bytes, > + int max_payloads, int conn_base_id) > +{ > + mutex_init(&mgr->lock); > + mutex_init(&mgr->qlock); > + mutex_init(&mgr->aux_lock); > + mutex_init(&mgr->payload_lock); > + INIT_LIST_HEAD(&mgr->tx_msg_upq); > + INIT_LIST_HEAD(&mgr->tx_msg_downq); > + INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work); > + INIT_WORK(&mgr->tx_work, drm_dp_tx_work); > + init_waitqueue_head(&mgr->tx_waitq); > + mgr->dev =3D dev; > + mgr->aux =3D aux; > + mgr->max_dpcd_transaction_bytes =3D max_dpcd_transaction_bytes; > + mgr->max_payloads =3D max_payloads; > + mgr->conn_base_id =3D conn_base_id; > + mgr->payloads =3D kcalloc(max_payloads, sizeof(struct drm_dp_payload)= ,=20 > GFP_KERNEL); > + if (!mgr->payloads) > + return -ENOMEM; > + mgr->proposed_vcpis =3D kcalloc(max_payloads, sizeof(struct=20 > drm_dp_vcpi *), GFP_KERNEL); > + if (!mgr->proposed_vcpis) > + return -ENOMEM; > + set_bit(0, &mgr->payload_mask); > + test_calc_pbn_mode(); > + return 0; > +} > +EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init); > + > +/** > + * drm_dp_mst_topology_mgr_destroy() - destroy topology manager. > + * @mgr: manager to destroy > + */ > +void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *m= gr) > +{ > + mutex_lock(&mgr->payload_lock); > + kfree(mgr->payloads); > + mgr->payloads =3D NULL; > + kfree(mgr->proposed_vcpis); > + mgr->proposed_vcpis =3D NULL; > + mutex_unlock(&mgr->payload_lock); > + mgr->dev =3D NULL; > + mgr->aux =3D NULL; > +} > +EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy); > + > +/* I2C device */ > +static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter, struct=20 > i2c_msg *msgs, > + int num) > +{ > + struct drm_dp_aux *aux =3D adapter->algo_data; > + struct drm_dp_mst_port *port =3D container_of(aux, struct=20 > drm_dp_mst_port, aux); > + struct drm_dp_mst_branch *mstb; > + struct drm_dp_mst_topology_mgr *mgr =3D port->mgr; > + unsigned int i; > + bool reading =3D false; > + struct drm_dp_sideband_msg_req_body msg; > + struct drm_dp_sideband_msg_tx *txmsg =3D NULL; > + int ret; > + > + mstb =3D drm_dp_get_validated_mstb_ref(mgr, port->parent); > + if (!mstb) > + return -EREMOTEIO; > + > + /* construct i2c msg */ > + /* see if last msg is a read */ > + if (msgs[num - 1].flags & I2C_M_RD) > + reading =3D true; > + > + if (!reading) { > + DRM_DEBUG_KMS("Unsupported I2C transaction for MST device\n"); > + ret =3D -EIO; > + goto out; > + } > + > + msg.req_type =3D DP_REMOTE_I2C_READ; > + msg.u.i2c_read.num_transactions =3D num - 1; > + msg.u.i2c_read.port_number =3D port->port_num; > + for (i =3D 0; i < num - 1; i++) { > + msg.u.i2c_read.transactions[i].i2c_dev_id =3D msgs[i].addr; > + msg.u.i2c_read.transactions[i].num_bytes =3D msgs[i].len; > + memcpy(&msg.u.i2c_read.transactions[i].bytes, msgs[i].buf, msgs[i].le= n); > + } > + msg.u.i2c_read.read_i2c_device_id =3D msgs[num - 1].addr; > + msg.u.i2c_read.num_bytes_read =3D msgs[num - 1].len; > + > + txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL); > + if (!txmsg) { > + ret =3D -ENOMEM; > + goto out; > + } > + > + txmsg->dst =3D mstb; > + drm_dp_encode_sideband_req(&msg, txmsg); > + > + drm_dp_queue_down_tx(mgr, txmsg); > + > + ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg); > + if (ret > 0) { > + > + if (txmsg->reply.reply_type =3D=3D 1) { /* got a NAK back */ > + ret =3D -EREMOTEIO; > + goto out; > + } > + if (txmsg->reply.u.remote_i2c_read_ack.num_bytes !=3D msgs[num - 1].l= en) { > + ret =3D -EIO; > + goto out; > + } > + memcpy(msgs[num - 1].buf, txmsg->reply.u.remote_i2c_read_ack.bytes,=20 > msgs[num - 1].len); > + ret =3D num; > + } > +out: > + kfree(txmsg); > + drm_dp_put_mst_branch_device(mstb); > + return ret; > +} > + > +static u32 drm_dp_mst_i2c_functionality(struct i2c_adapter *adapter) > +{ > + return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | > + I2C_FUNC_SMBUS_READ_BLOCK_DATA | > + I2C_FUNC_SMBUS_BLOCK_PROC_CALL | > + I2C_FUNC_10BIT_ADDR; > +} > + > +static const struct i2c_algorithm drm_dp_mst_i2c_algo =3D { > + .functionality =3D drm_dp_mst_i2c_functionality, > + .master_xfer =3D drm_dp_mst_i2c_xfer, > +}; > + > +/** > + * drm_dp_mst_register_i2c_bus() - register an I2C adapter for=20 > I2C-over-AUX > + * @aux: DisplayPort AUX channel > + * > + * Returns 0 on success or a negative error code on failure. > + */ > +static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux) > +{ > + aux->ddc.algo =3D &drm_dp_mst_i2c_algo; > + aux->ddc.algo_data =3D aux; > + aux->ddc.retries =3D 3; > + > + aux->ddc.class =3D I2C_CLASS_DDC; > + aux->ddc.owner =3D THIS_MODULE; > + aux->ddc.dev.parent =3D aux->dev; > + aux->ddc.dev.of_node =3D aux->dev->of_node; > + > + strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(aux->dev), > + sizeof(aux->ddc.name)); > + > + return i2c_add_adapter(&aux->ddc); > +} > + > +/** > + * drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapte= r > + * @aux: DisplayPort AUX channel > + */ > +static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux) > +{ > + i2c_del_adapter(&aux->ddc); > +} > diff --git a/include/drm/drm_dp_mst_helper.h=20 > b/include/drm/drm_dp_mst_helper.h > new file mode 100644 > index 0000000..6626d1b > --- /dev/null > +++ b/include/drm/drm_dp_mst_helper.h > @@ -0,0 +1,507 @@ > +/* > + * Copyright =C2=A9 2014 Red Hat. > + * > + * Permission to use, copy, modify, distribute, and sell this=20 > software and its > + * documentation for any purpose is hereby granted without fee,=20 > provided that > + * the above copyright notice appear in all copies and that both that=20 > copyright > + * notice and this permission notice appear in supporting=20 > documentation, and > + * that the name of the copyright holders not be used in advertising o= r > + * publicity pertaining to distribution of the software without specif= ic, > + * written prior permission. The copyright holders make no=20 > representations > + * about the suitability of this software for any purpose. It is=20 > provided "as > + * is" without express or implied warranty. > + * > + * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS=20 > SOFTWARE, > + * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN= NO > + * EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL,=20 > INDIRECT OR > + * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM=20 > LOSS OF USE, > + * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OT= HER > + * TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR=20 > PERFORMANCE > + * OF THIS SOFTWARE. > + */ > +#ifndef _DRM_DP_MST_HELPER_H_ > +#define _DRM_DP_MST_HELPER_H_ > + > +#include > +#include > + > +struct drm_dp_mst_branch; > + > +/** > + * struct drm_dp_vcpi - Virtual Channel Payload Identifer > + * @vcpi: Virtual channel ID. > + * @pbn: Payload Bandwidth Number for this channel > + * @aligned_pbn: PBN aligned with slot size > + * @num_slots: number of slots for this PBN > + */ > +struct drm_dp_vcpi { > + int vcpi; > + int pbn; > + int aligned_pbn; > + int num_slots; > +}; > + > +/** > + * struct drm_dp_mst_port - MST port > + * @kref: reference count for this port. > + * @guid_valid: for DP 1.2 devices if we have validated the GUID. > + * @guid: guid for DP 1.2 device on this port. > + * @port_num: port number > + * @input: if this port is an input port. > + * @mcs: message capability status - DP 1.2 spec. > + * @ddps: DisplayPort Device Plug Status - DP 1.2 > + * @pdt: Peer Device Type > + * @ldps: Legacy Device Plug Status > + * @dpcd_rev: DPCD revision of device on this port > + * @available_pbn: Available bandwidth for this port. > + * @next: link to next port on this branch device > + * @mstb: branch device attach below this port > + * @aux: i2c aux transport to talk to device connected to this port. > + * @parent: branch device parent of this port > + * @vcpi: Virtual Channel Payload info for this port. > + * @connector: DRM connector this port is connected to. > + * @mgr: topology manager this port lives under. > + * > + * This structure represents an MST port endpoint on a device somewher= e > + * in the MST topology. > + */ > +struct drm_dp_mst_port { > + struct kref kref; > + > + /* if dpcd 1.2 device is on this port - its GUID info */ > + bool guid_valid; > + u8 guid[16]; > + > + u8 port_num; > + bool input; > + bool mcs; > + bool ddps; > + u8 pdt; > + bool ldps; > + u8 dpcd_rev; > + uint16_t available_pbn; > + struct list_head next; > + struct drm_dp_mst_branch *mstb; /* pointer to an mstb if this port=20 > has one */ > + struct drm_dp_aux aux; /* i2c bus for this port? */ > + struct drm_dp_mst_branch *parent; > + > + struct drm_dp_vcpi vcpi; > + struct drm_connector *connector; > + struct drm_dp_mst_topology_mgr *mgr; > +}; > + > +/** > + * struct drm_dp_mst_branch - MST branch device. > + * @kref: reference count for this port. > + * @rad: Relative Address to talk to this branch device. > + * @lct: Link count total to talk to this branch device. > + * @num_ports: number of ports on the branch. > + * @msg_slots: one bit per transmitted msg slot. > + * @ports: linked list of ports on this branch. > + * @port_parent: pointer to the port parent, NULL if toplevel. > + * @mgr: topology manager for this branch device. > + * @tx_slots: transmission slots for this device. > + * @last_seqno: last sequence number used to talk to this. > + * @link_address_sent: if a link address message has been sent to=20 > this device yet. > + * > + * This structure represents an MST branch device, there is one > + * primary branch device at the root, along with any others connected > + * to downstream ports > + */ > +struct drm_dp_mst_branch { > + struct kref kref; > + u8 rad[8]; > + u8 lct; > + int num_ports; > + > + int msg_slots; > + struct list_head ports; > + > + /* list of tx ops queue for this port */ > + struct drm_dp_mst_port *port_parent; > + struct drm_dp_mst_topology_mgr *mgr; > + > + /* slots are protected by mstb->mgr->qlock */ > + struct drm_dp_sideband_msg_tx *tx_slots[2]; > + int last_seqno; > + bool link_address_sent; > +}; > + > + > +/* sideband msg header - not bit struct */ > +struct drm_dp_sideband_msg_hdr { > + u8 lct; > + u8 lcr; > + u8 rad[8]; > + bool broadcast; > + bool path_msg; > + u8 msg_len; > + bool somt; > + bool eomt; > + bool seqno; > +}; > + > +struct drm_dp_nak_reply { > + u8 guid[16]; > + u8 reason; > + u8 nak_data; > +}; > + > +struct drm_dp_link_address_ack_reply { > + u8 guid[16]; > + u8 nports; > + struct drm_dp_link_addr_reply_port { > + bool input_port; > + u8 peer_device_type; > + u8 port_number; > + bool mcs; > + bool ddps; > + bool legacy_device_plug_status; > + u8 dpcd_revision; > + u8 peer_guid[16]; > + bool num_sdp_streams; > + bool num_sdp_stream_sinks; > + } ports[16]; > +}; > + > +struct drm_dp_remote_dpcd_read_ack_reply { > + u8 port_number; > + u8 num_bytes; > + u8 bytes[255]; > +}; > + > +struct drm_dp_remote_dpcd_write_ack_reply { > + u8 port_number; > +}; > + > +struct drm_dp_remote_dpcd_write_nak_reply { > + u8 port_number; > + u8 reason; > + u8 bytes_written_before_failure; > +}; > + > +struct drm_dp_remote_i2c_read_ack_reply { > + u8 port_number; > + u8 num_bytes; > + u8 bytes[255]; > +}; > + > +struct drm_dp_remote_i2c_read_nak_reply { > + u8 port_number; > + u8 nak_reason; > + u8 i2c_nak_transaction; > +}; > + > +struct drm_dp_remote_i2c_write_ack_reply { > + u8 port_number; > +}; > + > + > +struct drm_dp_sideband_msg_rx { > + u8 chunk[48]; > + u8 msg[256]; > + u8 curchunk_len; > + u8 curchunk_idx; /* chunk we are parsing now */ > + u8 curchunk_hdrlen; > + u8 curlen; /* total length of the msg */ > + bool have_somt; > + bool have_eomt; > + struct drm_dp_sideband_msg_hdr initial_hdr; > +}; > + > + > +struct drm_dp_allocate_payload { > + u8 port_number; > + u8 number_sdp_streams; > + u8 vcpi; > + u16 pbn; > + u8 sdp_stream_sink[8]; > +}; > + > +struct drm_dp_allocate_payload_ack_reply { > + u8 port_number; > + u8 vcpi; > + u16 allocated_pbn; > +}; > + > +struct drm_dp_connection_status_notify { > + u8 guid[16]; > + u8 port_number; > + bool legacy_device_plug_status; > + bool displayport_device_plug_status; > + bool message_capability_status; > + bool input_port; > + u8 peer_device_type; > +}; > + > +struct drm_dp_remote_dpcd_read { > + u8 port_number; > + u32 dpcd_address; > + u8 num_bytes; > +}; > + > +struct drm_dp_remote_dpcd_write { > + u8 port_number; > + u32 dpcd_address; > + u8 num_bytes; > + u8 bytes[255]; > +}; > + > +struct drm_dp_remote_i2c_read { > + u8 num_transactions; > + u8 port_number; > + struct { > + u8 i2c_dev_id; > + u8 num_bytes; > + u8 bytes[255]; > + u8 no_stop_bit; > + u8 i2c_transaction_delay; > + } transactions[4]; > + u8 read_i2c_device_id; > + u8 num_bytes_read; > +}; > + > +struct drm_dp_remote_i2c_write { > + u8 port_number; > + u8 write_i2c_device_id; > + u8 num_bytes; > + u8 bytes[255]; > +}; > + > +/* this covers ENUM_RESOURCES, POWER_DOWN_PHY, POWER_UP_PHY */ > +struct drm_dp_port_number_req { > + u8 port_number; > +}; > + > +struct drm_dp_enum_path_resources_ack_reply { > + u8 port_number; > + u16 full_payload_bw_number; > + u16 avail_payload_bw_number; > +}; > + > +/* covers POWER_DOWN_PHY, POWER_UP_PHY */ > +struct drm_dp_port_number_rep { > + u8 port_number; > +}; > + > +struct drm_dp_query_payload { > + u8 port_number; > + u8 vcpi; > +}; > + > +struct drm_dp_resource_status_notify { > + u8 port_number; > + u8 guid[16]; > + u16 available_pbn; > +}; > + > +struct drm_dp_query_payload_ack_reply { > + u8 port_number; > + u8 allocated_pbn; > +}; > + > +struct drm_dp_sideband_msg_req_body { > + u8 req_type; > + union ack_req { > + struct drm_dp_connection_status_notify conn_stat; > + struct drm_dp_port_number_req port_num; > + struct drm_dp_resource_status_notify resource_stat; > + > + struct drm_dp_query_payload query_payload; > + struct drm_dp_allocate_payload allocate_payload; > + > + struct drm_dp_remote_dpcd_read dpcd_read; > + struct drm_dp_remote_dpcd_write dpcd_write; > + > + struct drm_dp_remote_i2c_read i2c_read; > + struct drm_dp_remote_i2c_write i2c_write; > + } u; > +}; > + > +struct drm_dp_sideband_msg_reply_body { > + u8 reply_type; > + u8 req_type; > + union ack_replies { > + struct drm_dp_nak_reply nak; > + struct drm_dp_link_address_ack_reply link_addr; > + struct drm_dp_port_number_rep port_number; > + > + struct drm_dp_enum_path_resources_ack_reply path_resources; > + struct drm_dp_allocate_payload_ack_reply allocate_payload; > + struct drm_dp_query_payload_ack_reply query_payload; > + > + struct drm_dp_remote_dpcd_read_ack_reply remote_dpcd_read_ack; > + struct drm_dp_remote_dpcd_write_ack_reply remote_dpcd_write_ack; > + struct drm_dp_remote_dpcd_write_nak_reply remote_dpcd_write_nack; > + > + struct drm_dp_remote_i2c_read_ack_reply remote_i2c_read_ack; > + struct drm_dp_remote_i2c_read_nak_reply remote_i2c_read_nack; > + struct drm_dp_remote_i2c_write_ack_reply remote_i2c_write_ack; > + } u; > +}; > + > +/* msg is queued to be put into a slot */ > +#define DRM_DP_SIDEBAND_TX_QUEUED 0 > +/* msg has started transmitting on a slot - still on msgq */ > +#define DRM_DP_SIDEBAND_TX_START_SEND 1 > +/* msg has finished transmitting on a slot - removed from msgq only=20 > in slot */ > +#define DRM_DP_SIDEBAND_TX_SENT 2 > +/* msg has received a response - removed from slot */ > +#define DRM_DP_SIDEBAND_TX_RX 3 > +#define DRM_DP_SIDEBAND_TX_TIMEOUT 4 > + > +struct drm_dp_sideband_msg_tx { > + u8 msg[256]; > + u8 chunk[48]; > + u8 cur_offset; > + u8 cur_len; > + struct drm_dp_mst_branch *dst; > + struct list_head next; > + int seqno; > + int state; > + bool path_msg; > + struct drm_dp_sideband_msg_reply_body reply; > +}; > + > +/* sideband msg handler */ > +struct drm_dp_mst_topology_mgr; > +struct drm_dp_mst_topology_cbs { > + /* create a connector for a port */ > + struct drm_connector *(*add_connector)(struct=20 > drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, char *path)= ; > + void (*destroy_connector)(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_connector *connector); > + void (*hotplug)(struct drm_dp_mst_topology_mgr *mgr); > + > +}; > + > +#define DP_MAX_PAYLOAD (sizeof(unsigned long) * 8) > + > +#define DP_PAYLOAD_LOCAL 1 > +#define DP_PAYLOAD_REMOTE 2 > +#define DP_PAYLOAD_DELETE_LOCAL 3 > + > +struct drm_dp_payload { > + int payload_state; > + int start_slot; > + int num_slots; > +}; > + > +/** > + * struct drm_dp_mst_topology_mgr - DisplayPort MST manager > + * @dev: device pointer for adding i2c devices etc. > + * @cbs: callbacks for connector addition and destruction. > + * @max_dpcd_transaction_bytes - maximum number of bytes to=20 > read/write in one go. > + * @aux: aux channel for the DP connector. > + * @max_payloads: maximum number of payloads the GPU can generate. > + * @conn_base_id: DRM connector ID this mgr is connected to. > + * @down_rep_recv: msg receiver state for down replies. > + * @up_req_recv: msg receiver state for up requests. > + * @lock: protects mst state, primary, guid, dpcd. > + * @aux_lock: protects aux channel. > + * @mst_state: if this manager is enabled for an MST capable port. > + * @mst_primary: pointer to the primary branch device. > + * @guid_valid: GUID valid for the primary branch device. > + * @guid: GUID for primary port. > + * @dpcd: cache of DPCD for primary port. > + * @pbn_div: PBN to slots divisor. > + * > + * This struct represents the toplevel displayport MST topology manage= r. > + * There should be one instance of this for every MST capable DP=20 > connector > + * on the GPU. > + */ > +struct drm_dp_mst_topology_mgr { > + > + struct device *dev; > + struct drm_dp_mst_topology_cbs *cbs; > + int max_dpcd_transaction_bytes; > + struct drm_dp_aux *aux; /* auxch for this topology mgr to use */ > + int max_payloads; > + int conn_base_id; > + > + /* only ever accessed from the workqueue - which should be serialised= */ > + struct drm_dp_sideband_msg_rx down_rep_recv; > + struct drm_dp_sideband_msg_rx up_req_recv; > + > + /* pointer to info about the initial MST device */ > + struct mutex lock; /* protects mst_state + primary + guid + dpcd */ > + > + struct mutex aux_lock; /* protect access to the AUX */ > + bool mst_state; > + struct drm_dp_mst_branch *mst_primary; > + /* primary MST device GUID */ > + bool guid_valid; > + u8 guid[16]; > + u8 dpcd[DP_RECEIVER_CAP_SIZE]; > + u8 sink_count; > + int pbn_div; > + int total_slots; > + int avail_slots; > + int total_pbn; > + > + /* messages to be transmitted */ > + /* qlock protects the upq/downq and in_progress, > + the mstb tx_slots and txmsg->state once they are queued */ > + struct mutex qlock; > + struct list_head tx_msg_downq; > + struct list_head tx_msg_upq; > + bool tx_down_in_progress; > + bool tx_up_in_progress; > + > + /* payload info + lock for it */ > + struct mutex payload_lock; > + struct drm_dp_vcpi **proposed_vcpis; > + struct drm_dp_payload *payloads; > + unsigned long payload_mask; > + > + wait_queue_head_t tx_waitq; > + struct work_struct work; > + > + struct work_struct tx_work; > +}; > + > +int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,=20 > struct device *dev, struct drm_dp_aux *aux, int=20 > max_dpcd_transaction_bytes, int max_payloads, int conn_base_id); > + > +void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr=20 > *mgr); > + > + > +int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr=20 > *mgr, bool mst_state); > + > + > +int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi,=20 > bool *handled); > + > + > +enum drm_connector_status drm_dp_mst_detect_port(struct=20 > drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); > + > +struct edid *drm_dp_mst_get_edid(struct drm_connector *connector,=20 > struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); > + > + > +int drm_dp_calc_pbn_mode(int clock, int bpp); > + > + > +bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,=20 > struct drm_dp_mst_port *port, int pbn, int *slots); > + > + > +void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,=20 > struct drm_dp_mst_port *port); > + > + > +void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, > + struct drm_dp_mst_port *port); > + > + > +int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, > + int pbn); > + > + > +int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr); > + > + > +int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr); > + > +int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr); > + > +void drm_dp_mst_dump_topology(struct seq_file *m, > + struct drm_dp_mst_topology_mgr *mgr); > + > +void drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr=20 > *mgr); > +int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr= ); > +#endif > Dave Airlie > Tuesday, May 20, 2014 7:54 PM > Hey, > > So this set is pretty close to what I think we should be merging=20 > initially, > > Since the last set, it makes fbcon and suspend/resume work a lot better= , > > I've also fixed a couple of bugs in -intel that make things work a lot > better. > > I've bashed on this a bit using kms-flip from intel-gpu-tools, hacked > to add 3 monitor support. > > It still generates a fair few i915 state checker backtraces, and some > of them are fairly hard to work out, it might be we should just tone > down the state checker for encoders/connectors with no actual hw backin= g > them. > > Dave. > > _______________________________________________ > Intel-gfx mailing list > Intel-gfx@lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/intel-gfx --=20 Sent using Postbox: http://www.getpostbox.com --------------070900050502070101020808 Content-Type: multipart/related; boundary="------------010607080907070602060102" --------------010607080907070602060102 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable
This patch is a monster, but that's to be expected with MST, I suppose.=20 :) It has some formatting issues (lines over 80 characters in length)=20 but that can be cleaned up later (as far as I'm concerned). Otherwise I=20 don't see anything glaring here, so...

Reviewed-by: Todd Previte <tprevite@gmail.com>
=C2=A0

Dave Airlie = =20 Tuesday, May 2= 0,=20 2014 7:55 PM
From: Dave Airlie=20 <= ;airlied@redhat.com>

This is the initial import of the=20 helper for displayport multistream.

It consists of a topology=20 manager, init/destroy/set mst state

It supports DP 1.2 MST=20 sideband msg protocol handler - via hpd irqs

connector detect and edid retrieval interface.

It supports i2c device over DP 1.2=20 sideband msg protocol (EDID reads only)

bandwidth manager API via vcpi allocation and payload updating,
along with a helper to check=20 the ACT status.

Objects:
MST topology manager - one per=20 toplevel MST capable GPU port - not sure if this should be higher level=20 again
MST branch unit - one instance per plugged branching unit - one at top of hierarchy - others hanging from ports
MST port - one port=20 per port reported by branching units, can have MST units hanging from=20 them as well.

Changes since initial posting:
a) add a mutex=20 responsbile for the queues, it locks the sideband and msg slots, and=20 msgs to transmit state
b) add worker to handle connection state=20 change events, for MST device chaining and hotplug
c) add a payload=20 spinlock
d) add path sideband msg support
e) fixup enum path=20 resources transmit
f) reduce max dpcd msg to 16, as per DP1.2 spec.g) separate tx queue kicking from irq processing and move irq acking back=20 to drivers.

Changes since v0.2:
a) reorganise code,
b) drop ACT forcing code
c) add connector naming interface using path=20 property
d) add topology dumper helper
e) proper reference=20 counting and lookup for ports and mstbs.
f) move tx kicking into a=20 workq
g) add aux locking - this should be redone
h) split teardown into two parts
i) start working on documentation on interface.
Changes since v0.3:
a) vc payload locking and tracking fixes
b) add=20 hotplug callback into driver - replaces crazy return 1 scheme
c)=20 txmsg + mst branch device refcount fixes
d) don't bail on mst=20 shutdown if device is gone
e) change irq handler to take all 4 bytes=20 of SINK_COUNT + ESI vectors
f) make DP payload updates timeout longer - observed on docking station redock
g) add more info to debugfs=20 dumper

Changes since v0.4:
a) suspend/resume support
b)=20 more debugging in debugfs

TODO:
misc features

Signed-off= -by: Dave Airlie <airlied@redhat.com>
---
=20 Documentation/DocBook/drm.tmpl | 6 +
=20 drivers/gpu/drm/Makefile | 2 +-
=20 drivers/gpu/drm/drm_dp_mst_topology.c | 2739=20 +++++++++++++++++++++++++++++++++
include/drm/drm_dp_mst_helper.h =20 | 507 ++++++
4 files changed, 3253 insertions(+), 1 deletion(-)<= br> create mode 100644 drivers/gpu/drm/drm_dp_mst_topology.c
create=20 mode 100644 include/drm/drm_dp_mst_helper.h

diff --git=20 a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
inde= x 83dd0b0..1883976 100644
--- a/Documentation/DocBook/drm.tmpl
+++=20 b/Documentation/DocBook/drm.tmpl
@@ -2296,6 +2296,12 @@ void=20 intel_crt_init(struct drm_device *dev)
=20 !Edrivers/gpu/drm/drm_dp_helper.c
</sect2>
=20 <sect2>
+ <title>Display Port MST Helper Functions=20 Reference</title>
+!Pdrivers/gpu/drm/drm_dp_mst_topology.c dp=20 mst helper
+!Iinclude/drm/drm_dp_mst_helper.h
+!Edrivers/gpu/drm/dr= m_dp_mst_topology.c
+ </sect2>
+ <sect2>
<title>EDID=20 Helper Functions Reference</title>
=20 !Edrivers/gpu/drm/drm_edid.c
</sect2>
diff --git=20 a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index=20 48e38ba..712b73e 100644
--- a/drivers/gpu/drm/Makefile
+++=20 b/drivers/gpu/drm/Makefile
@@ -23,7 +23,7 @@ drm-$(CONFIG_DRM_PANEL)=20 +=3D drm_panel.o

drm-usb-y :=3D drm_usb.o

-drm_kms_hel= per-y :=3D drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o
+drm_kms_he= lper-y :=3D drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o=20 drm_dp_mst_topology.o
=20 drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) +=3D drm_edid_load.o
=20 drm_kms_helper-$(CONFIG_DRM_KMS_FB_HELPER) +=3D drm_fb_helper.o
=20 drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) +=3D drm_fb_cma_helper.o
d= iff --git a/drivers/gpu/drm/drm_dp_mst_topology.c=20 b/drivers/gpu/drm/drm_dp_mst_topology.c
new file mode 100644
index 0000000..ebd9292
--- /dev/null
+++=20 b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -0,0 +1,2739 @@
+/*
+ * Copyright =C2=A9 2014 Red Hat
+ *
+ * Permission to use, copy,=20 modify, distribute, and sell this software and its
+ * documentation=20 for any purpose is hereby granted without fee, provided that
+ * the=20 above copyright notice appear in all copies and that both that copyright<= br>+ * notice and this permission notice appear in supporting documentation, and
+ * that the name of the copyright holders not be used in=20 advertising or
+ * publicity pertaining to distribution of the=20 software without specific,
+ * written prior permission. The=20 copyright holders make no representations
+ * about the suitability=20 of this software for any purpose. It is provided "as
+ * is" without express or implied warranty.
+ *
+ * THE COPYRIGHT HOLDERS=20 DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
+ * INCLUDING=20 ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
+ *=20 EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR<= br>+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS=20 OF USE,
+ * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,=20 NEGLIGENCE OR OTHER
+ * TORTIOUS ACTION, ARISING OUT OF OR IN=20 CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THIS SOFTWARE.
+ */+
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include=20 <linux/init.h>
+#include <linux/errno.h>
+#include=20 <linux/sched.h>
+#include <linux/i2c.h>
+#include=20 <drm/drm_dp_mst_helper.h>
+#include <drm/drmP.h>
+
+= #include <drm/drm_fixed.h>
+
+/**
+ * DOC: dp mst helper
+ *+ * These functions contain parts of the DisplayPort 1.2a MultiStream=20 Transport
+ * protocol. The helpers contain a topology manager and=20 bandwidth manager.
+ * The helpers encapsulate the sending and=20 received of sideband msgs.
+ */
+static bool=20 dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
+ =20 char *buf);
+static int test_calc_pbn_mode(void);
+
+static=20 void drm_dp_put_port(struct drm_dp_mst_port *port);
+
+static int=20 drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,
+ =20 int id,
+ struct drm_dp_payload *payload);
+
+static int drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,
+ =09 struct drm_dp_mst_port *port,
+ int offset, int size, u8=20 *bytes);
+
+static int drm_dp_send_link_address(struct=20 drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_branch=20 *mstb);
+static int drm_dp_send_enum_path_resources(struct=20 drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_branch=20 *mstb,
+ struct drm_dp_mst_port *port);
+static bool=20 drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
+ u8=20 *guid);
+
+static int drm_dp_mst_register_i2c_bus(struct=20 drm_dp_aux *aux);
+static void drm_dp_mst_unregister_i2c_bus(struct=20 drm_dp_aux *aux);
+static void drm_dp_mst_kick_tx(struct=20 drm_dp_mst_topology_mgr *mgr);
+/* sideband msg handling */
+static u8 drm_dp_msg_header_crc4(const uint8_t *data, size_t num_nibbles)
+{=
+ u8 bitmask =3D 0x80;
+ u8 bitshift =3D 7;
+ u8 array_index =3D 0;<= br>+=09 int number_of_bits =3D num_nibbles * 4;
+ u8 remainder =3D 0;
+
= +=09 while (number_of_bits !=3D 0) {
+ number_of_bits--;
+ remainder=20 <<=3D 1;
+ remainder |=3D (data[array_index] & bitmask)=20 >> bitshift;
+ bitmask >>=3D 1;
+ bitshift--;
+ i= f (bitmask =3D=3D 0) {
+ bitmask =3D 0x80;
+ bitshift =3D 7;
= + =09 array_index++;
+ }
+ if ((remainder & 0x10) =3D=3D 0x10)
+= =09 remainder ^=3D 0x13;
+ }
+
+ number_of_bits =3D 4;
+ while=20 (number_of_bits !=3D 0) {
+ number_of_bits--;
+ remainder=20 <<=3D 1;
+ if ((remainder & 0x10) !=3D 0)
+ remainder = ^=3D=20 0x13;
+ }
+
+ return remainder;
+}
+
+static u8=20 drm_dp_msg_data_crc4(const uint8_t *data, u8 number_of_bytes)
+{
+ u8 bitmask =3D 0x80;
+ u8 bitshift =3D 7;
+ u8 array_index =3D 0;<= br>+=09 int number_of_bits =3D number_of_bytes * 8;
+ u16 remainder =3D 0;
= +
+ while (number_of_bits !=3D 0) {
+ number_of_bits--;
+ remainder=20 <<=3D 1;
+ remainder |=3D (data[array_index] & bitmask)=20 >> bitshift;
+ bitmask >>=3D 1;
+ bitshift--;
+ i= f (bitmask =3D=3D 0) {
+ bitmask =3D 0x80;
+ bitshift =3D 7;
= + =09 array_index++;
+ }
+ if ((remainder & 0x100) =3D=3D 0x100)+=09 remainder ^=3D 0xd5;
+ }
+
+ number_of_bits =3D 8;
+ while=20 (number_of_bits !=3D 0) {
+ number_of_bits--;
+ remainder=20 <<=3D 1;
+ if ((remainder & 0x100) !=3D 0)

+ =09 remainder ^=3D 0xd5;
+ }
+
+ return remainder & 0xff;
+}<= br>+static inline u8 drm_dp_calc_sb_hdr_size(struct drm_dp_sideband_msg_hdr *hdr)+{
+ u8 size =3D 3;
+ size +=3D (hdr->lct / 2);
+ return size;
+}=
+
+static void drm_dp_encode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr=20 *hdr,
+ u8 *buf, int *len)
+{
+ int idx =3D 0;
+ int i= ;
+ u8 crc4;
+ buf[idx++] =3D ((hdr->lct & 0xf) << 4) |=20 (hdr->lcr & 0xf);
+ for (i =3D 0; i < (hdr->lct / 2); i++= )
+ buf[idx++] =3D hdr->rad[i];
+ buf[idx++] =3D (hdr->broadcast=20 << 7) | (hdr->path_msg << 6) |
+ (hdr->msg_len=20 & 0x3f);
+ buf[idx++] =3D (hdr->somt << 7) | (hdr->eom= t << 6) | (hdr->seqno << 4);
+
+ crc4 =3D=20 drm_dp_msg_header_crc4(buf, (idx * 2) - 1);
+ buf[idx - 1] |=3D (crc4=20 & 0xf);
+
+ *len =3D idx;
+}
+
+static bool=20 drm_dp_decode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr *hdr,
+ u8 *buf, int buflen, u8 *hdrlen)
+{
+ u8 crc4;
+ u8=20 len;
+ int i;
+ u8 idx;
+ if (buf[0] =3D=3D 0)
+ return fals= e;
+ len =3D 3;
+ len +=3D ((buf[0] & 0xf0) >> 4) / 2;
+ if (= len > buflen)
+ return false;
+ crc4 =3D=20 drm_dp_msg_header_crc4(buf, (len * 2) - 1);
+
+ if ((crc4 &=20 0xf) !=3D (buf[len - 1] & 0xf)) {
+ DRM_DEBUG_KMS("crc4 mismatch=20 0x%x 0x%x\n", crc4, buf[len - 1]);
+ return false;
+ }
+
+=09 hdr->lct =3D (buf[0] & 0xf0) >> 4;
+ hdr->lcr =3D (buf= [0] & 0xf);
+ idx =3D 1;
+ for (i =3D 0; i < (hdr->lct / 2);= =20 i++)
+ hdr->rad[i] =3D buf[idx++];
+ hdr->broadcast =3D=20 (buf[idx] >> 7) & 0x1;
+ hdr->path_msg =3D (buf[idx]=20 >> 6) & 0x1;
+ hdr->msg_len =3D buf[idx] & 0x3f;
+= =09 idx++;
+ hdr->somt =3D (buf[idx] >> 7) & 0x1;
+=09 hdr->eomt =3D (buf[idx] >> 6) & 0x1;
+ hdr->seqno =3D=20 (buf[idx] >> 4) & 0x1;
+ idx++;
+ *hdrlen =3D idx;
+=09 return true;
+}
+
+static void=20 drm_dp_encode_sideband_req(struct drm_dp_sideband_msg_req_body *req,
+ struct drm_dp_sideband_msg_tx *raw)
+{
+ int idx =3D 0;<= br>+ int i;
+ u8 *buf =3D raw->msg;
+ buf[idx++] =3D req->req_typ= e=20 & 0x7f;
+
+ switch (req->req_type) {
+ case=20 DP_ENUM_PATH_RESOURCES:
+ buf[idx] =3D (req->u.port_num.port_numbe= r & 0xf) << 4;
+ idx++;
+ break;
+ case=20 DP_ALLOCATE_PAYLOAD:
+ buf[idx] =3D=20 (req->u.allocate_payload.port_number & 0xf) << 4 |
+ =09 (req->u.allocate_payload.number_sdp_streams & 0xf);
+ idx++;+ buf[idx] =3D (req->u.allocate_payload.vcpi & 0x7f);
+ idx++;=
+ buf[idx] =3D (req->u.allocate_payload.pbn >> 8);
+ idx++;<= br>+ buf[idx] =3D (req->u.allocate_payload.pbn & 0xff);
+ idx++;<= br>+ for (i =3D 0; i < req->u.allocate_payload.number_sdp_streams / 2;= =20 i++) {
+ buf[idx] =3D ((req->u.allocate_payload.sdp_stream_sink[i= * 2] & 0xf) << 4) |
+ =09 (req->u.allocate_payload.sdp_stream_sink[i * 2 + 1] & 0xf);
+=09 idx++;
+ }
+ if (req->u.allocate_payload.number_sdp_streams & 1) {
+ i =3D req->u.allocate_payload.number_sdp_streams -=20 1;
+ buf[idx] =3D (req->u.allocate_payload.sdp_stream_sink[i]=20 & 0xf) << 4;
+ idx++;
+ }
+ break;
+ case=20 DP_QUERY_PAYLOAD:
+ buf[idx] =3D (req->u.query_payload.port_number= =20 & 0xf) << 4;
+ idx++;
+ buf[idx] =3D=20 (req->u.query_payload.vcpi & 0x7f);
+ idx++;
+ break;
+ case DP_REMOTE_DPCD_READ:
+ buf[idx] =3D=20 (req->u.dpcd_read.port_number & 0xf) << 4;
+ buf[idx]=20 |=3D ((req->u.dpcd_read.dpcd_address & 0xf0000) >> 16) &= =20 0xf;
+ idx++;
+ buf[idx] =3D (req->u.dpcd_read.dpcd_address=20 & 0xff00) >> 8;
+ idx++;
+ buf[idx] =3D=20 (req->u.dpcd_read.dpcd_address & 0xff);
+ idx++;
+ =09 buf[idx] =3D (req->u.dpcd_read.num_bytes);
+ idx++;
+ break;+
+ case DP_REMOTE_DPCD_WRITE:
+ buf[idx] =3D=20 (req->u.dpcd_write.port_number & 0xf) << 4;
+ buf[idx]=20 |=3D ((req->u.dpcd_write.dpcd_address & 0xf0000) >> 16) &= ; 0xf;
+ idx++;
+ buf[idx] =3D (req->u.dpcd_write.dpcd_address=20 & 0xff00) >> 8;
+ idx++;
+ buf[idx] =3D=20 (req->u.dpcd_write.dpcd_address & 0xff);
+ idx++;
+ =09 buf[idx] =3D (req->u.dpcd_write.num_bytes);
+ idx++;
+ =09 memcpy(&buf[idx], req->u.dpcd_write.bytes,=20 req->u.dpcd_write.num_bytes);
+ idx +=3D=20 req->u.dpcd_write.num_bytes;
+ break;
+ case=20 DP_REMOTE_I2C_READ:
+ buf[idx] =3D (req->u.i2c_read.port_number=20 & 0xf) << 4;
+ buf[idx] |=3D=20 (req->u.i2c_read.num_transactions & 0x3);
+ idx++;
+ for=20 (i =3D 0; i < (req->u.i2c_read.num_transactions & 0x3); i++) {<= br>+ buf[idx] =3D req->u.i2c_read.transactions[i].i2c_dev_id & 0x7f;=

+ idx++;
+ buf[idx] =3D=20 req->u.i2c_read.transactions[i].num_bytes;
+ idx++;
+ =09 memcpy(&buf[idx], req->u.i2c_read.transactions[i].bytes,=20 req->u.i2c_read.transactions[i].num_bytes);
+ idx +=3D=20 req->u.i2c_read.transactions[i].num_bytes;
+
+ buf[idx] =3D=20 (req->u.i2c_read.transactions[i].no_stop_bit & 0x1) << 5;+ buf[idx] |=3D (req->u.i2c_read.transactions[i].i2c_transaction_dela= y & 0xf);
+ idx++;
+ }
+ buf[idx] =3D=20 (req->u.i2c_read.read_i2c_device_id) & 0x7f;
+ idx++;
+ =09 buf[idx] =3D (req->u.i2c_read.num_bytes_read);
+ idx++;
+ =09 break;
+
+ case DP_REMOTE_I2C_WRITE:
+ buf[idx] =3D=20 (req->u.i2c_write.port_number & 0xf) << 4;
+ idx++;
+ buf[idx] =3D (req->u.i2c_write.write_i2c_device_id) & 0x7f;
+= =09 idx++;
+ buf[idx] =3D (req->u.i2c_write.num_bytes);
+ idx++;<= br>+ memcpy(&buf[idx], req->u.i2c_write.bytes,=20 req->u.i2c_write.num_bytes);
+ idx +=3D=20 req->u.i2c_write.num_bytes;
+ break;
+ }
+ raw->cur_len =3D idx;
+}
+
+static void drm_dp_crc_sideband_chunk_req(u8 *msg,=20 u8 len)
+{
+ u8 crc4;
+ crc4 =3D drm_dp_msg_data_crc4(msg, len);=
+ msg[len] =3D crc4;
+}
+
+static void=20 drm_dp_encode_sideband_reply(struct drm_dp_sideband_msg_reply_body *rep,<= br>+ struct drm_dp_sideband_msg_tx *raw)
+{
+ int idx =3D 0;
+=09 u8 *buf =3D raw->msg;
+
+ buf[idx++] =3D (rep->reply_type &am= p;=20 0x1) << 7 | (rep->req_type & 0x7f);
+
+=09 raw->cur_len =3D idx;
+}
+
+/* this adds a chunk of msg to th= e builder to get the final msg */
+static bool=20 drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *msg,
+ =20 u8 *replybuf, u8 replybuflen, bool hdr)
+{
+ int ret;
+ u8=20 crc4;
+
+ if (hdr) {
+ u8 hdrlen;
+ struct=20 drm_dp_sideband_msg_hdr recv_hdr;
+ ret =3D=20 drm_dp_decode_sideband_msg_hdr(&recv_hdr, replybuf, replybuflen,=20 &hdrlen);
+ if (ret =3D=3D false) {
+ =09 print_hex_dump(KERN_DEBUG, "failed hdr", DUMP_PREFIX_NONE, 16, 1,=20 replybuf, replybuflen, false);
+ return false;
+ }
+
+ =09 /* get length contained in this portion */
+ msg->curchunk_len =3D= =20 recv_hdr.msg_len;
+ msg->curchunk_hdrlen =3D hdrlen;
+
+ /*= =20 we have already gotten an somt - don't bother parsing */
+ if=20 (recv_hdr.somt && msg->have_somt)
+ return false;
++ if (recv_hdr.somt) {
+ memcpy(&msg->initial_hdr,=20 &recv_hdr, sizeof(struct drm_dp_sideband_msg_hdr));
+ =09 msg->have_somt =3D true;
+ }
+ if (recv_hdr.eomt)
+ =09 msg->have_eomt =3D true;
+
+ /* copy the bytes for the remainde= r of this header chunk */
+ msg->curchunk_idx =3D=20 min(msg->curchunk_len, (u8)(replybuflen - hdrlen));
+ =09 memcpy(&msg->chunk[0], replybuf + hdrlen, msg->curchunk_idx);+ } else {
+ memcpy(&msg->chunk[msg->curchunk_idx],=20 replybuf, replybuflen);
+ msg->curchunk_idx +=3D replybuflen;
+= }
+
+ if (msg->curchunk_idx >=3D msg->curchunk_len) {
+ /* do CRC= =20 */
+ crc4 =3D drm_dp_msg_data_crc4(msg->chunk, msg->curchunk_le= n - 1);
+ /* copy chunk into bigger msg */
+ =09 memcpy(&msg->msg[msg->curlen], msg->chunk,=20 msg->curchunk_len - 1);
+ msg->curlen +=3D msg->curchunk_len= - 1;
+ }
+ return true;
+}
+
+static bool=20 drm_dp_sideband_parse_link_address(struct drm_dp_sideband_msg_rx *raw,+ struct drm_dp_sideband_msg_reply_body *repmsg)
+{
+=09 int idx =3D 1;
+ int i;
+ memcpy(repmsg->u.link_addr.guid,=20 &raw->msg[idx], 16);
+ idx +=3D 16;
+=09 repmsg->u.link_addr.nports =3D raw->msg[idx] & 0xf;
+ idx++;=
+ if (idx > raw->curlen)
+ goto fail_len;
+ for (i =3D 0; i=20 < repmsg->u.link_addr.nports; i++) {
+ if (raw->msg[idx]=20 & 0x80)
+ repmsg->u.link_addr.ports[i].input_port =3D 1;
+=
+ repmsg->u.link_addr.ports[i].peer_device_type =3D (raw->msg[idx]=20 >> 4) & 0x7;
+ repmsg->u.link_addr.ports[i].port_number =3D (raw->msg[idx] & 0xf);
+
+ idx++;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ =09 repmsg->u.link_addr.ports[i].mcs =3D (raw->msg[idx] >> 7)=20 & 0x1;
+ repmsg->u.link_addr.ports[i].ddps =3D=20 (raw->msg[idx] >> 6) & 0x1;
+ if=20 (repmsg->u.link_addr.ports[i].input_port =3D=3D 0)
+ =09 repmsg->u.link_addr.ports[i].legacy_device_plug_status =3D=20 (raw->msg[idx] >> 5) & 0x1;
+ idx++;
+ if (idx > raw->curlen)
+ goto fail_len;
+ if=20 (repmsg->u.link_addr.ports[i].input_port =3D=3D 0) {
+ =09 repmsg->u.link_addr.ports[i].dpcd_revision =3D (raw->msg[idx]);
= + idx++;
+ if (idx > raw->curlen)
+ goto fail_len;
+ memcpy(repmsg->u.link_addr.ports[i].peer_guid,=20 &raw->msg[idx], 16);
+ idx +=3D 16;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ =09 repmsg->u.link_addr.ports[i].num_sdp_streams =3D (raw->msg[idx]=20 >> 4) & 0xf;
+ =09 repmsg->u.link_addr.ports[i].num_sdp_stream_sinks =3D (raw->msg[idx= ] & 0xf);
+ idx++;
+
+ }
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ }
+
+ return true;
+f= ail_len:
+ DRM_DEBUG_KMS("link address reply parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_remote_dpcd_read(struct drm_dp_sideband_msg_rx=20 *raw,
+ struct drm_dp_sideband_msg_reply_body *repmsg)
+{+ int idx =3D 1;
+ repmsg->u.remote_dpcd_read_ack.port_number =3D=20 raw->msg[idx] & 0xf;
+ idx++;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+=09 repmsg->u.remote_dpcd_read_ack.num_bytes =3D raw->msg[idx];
+ if= =20 (idx > raw->curlen)
+ goto fail_len;
+
+=09 memcpy(repmsg->u.remote_dpcd_read_ack.bytes, &raw->msg[idx],=20 repmsg->u.remote_dpcd_read_ack.num_bytes);
+ return true;
+fail_= len:
+ DRM_DEBUG_KMS("link address reply parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_remote_dpcd_write(struct drm_dp_sideband_msg_rx=20 *raw,
+ struct drm_dp_sideband_msg_reply_body *repmsg)
+= {
+ int idx =3D 1;
+ repmsg->u.remote_dpcd_write_ack.port_number =3D=20 raw->msg[idx] & 0xf;
+ idx++;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ return true;
+fail_len:
+ DRM_DEBUG_KMS("parse length fail %d %d\n", idx, raw->curlen);
+=09 return false;
+}
+
+static bool=20 drm_dp_sideband_parse_remote_i2c_read_ack(struct drm_dp_sideband_msg_rx=20 *raw,
+ struct drm_dp_sideband_msg_reply_body *repmsg)
+= {
+ int idx =3D 1;
+
+ repmsg->u.remote_i2c_read_ack.port_number =3D= =20 (raw->msg[idx] & 0xf);
+ idx++;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+=09 repmsg->u.remote_i2c_read_ack.num_bytes =3D raw->msg[idx];
+=09 idx++;
+ /* TODO check */
+=09 memcpy(repmsg->u.remote_i2c_read_ack.bytes, &raw->msg[idx],=20 repmsg->u.remote_i2c_read_ack.num_bytes);
+ return true;
+fail_l= en:
+ DRM_DEBUG_KMS("remote i2c reply parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_enum_path_resources_ack(struct=20 drm_dp_sideband_msg_rx *raw,
+ struct=20 drm_dp_sideband_msg_reply_body *repmsg)
+{
+ int idx =3D 1;
+=09 repmsg->u.path_resources.port_number =3D (raw->msg[idx] >> 4)= =20 & 0xf;
+ idx++;
+ if (idx > raw->curlen)
+ goto=20 fail_len;
+ repmsg->u.path_resources.full_payload_bw_number =3D=20 (raw->msg[idx] << 8) | (raw->msg[idx+1]);
+ idx +=3D 2;+ if (idx > raw->curlen)
+ goto fail_len;
+=09 repmsg->u.path_resources.avail_payload_bw_number =3D (raw->msg[idx]= =20 << 8) | (raw->msg[idx+1]);
+ idx +=3D 2;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ return true;
+fail_len:
+ DRM_DEBUG_KMS("enum resource parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_allocate_payload_ack(struct drm_dp_sideband_msg_rx *raw,
+ struct drm_dp_sideband_msg_reply_body *repmsg)
+{<= br>+ int idx =3D 1;
+ repmsg->u.allocate_payload.port_number =3D=20 (raw->msg[idx] >> 4) & 0xf;
+ idx++;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+=09 repmsg->u.allocate_payload.vcpi =3D raw->msg[idx];
+ idx++;
+= =09 if (idx > raw->curlen)
+ goto fail_len;
+=09 repmsg->u.allocate_payload.allocated_pbn =3D (raw->msg[idx] <<= ; 8) | (raw->msg[idx+1]);
+ idx +=3D 2;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ return true;
+fail_len:
+ DRM_DEBUG_KMS("allocate payload parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_query_payload_ack(struct drm_dp_sideband_msg_rx=20 *raw,
+ struct drm_dp_sideband_msg_reply_body *repmsg)
+{<= br>+ int idx =3D 1;
+ repmsg->u.query_payload.port_number =3D=20 (raw->msg[idx] >> 4) & 0xf;
+ idx++;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+=09 repmsg->u.query_payload.allocated_pbn =3D (raw->msg[idx] << 8= ) | (raw->msg[idx + 1]);
+ idx +=3D 2;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+ return true;
+fail_len:
+ DRM_DEBUG_KMS("query payload parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_reply(struct drm_dp_sideband_msg_rx *raw,
+ =09 struct drm_dp_sideband_msg_reply_body *msg)
+{
+ memset(msg, 0,=20 sizeof(*msg));
+ msg->reply_type =3D (raw->msg[0] & 0x80)=20 >> 7;
+ msg->req_type =3D (raw->msg[0] & 0x7f);
++ if (msg->reply_type) {
+ memcpy(msg->u.nak.guid,=20 &raw->msg[1], 16);
+ msg->u.nak.reason =3D raw->msg[17];=
+ msg->u.nak.nak_data =3D raw->msg[18];
+ return false;
+ }<= br>+
+ switch (msg->req_type) {
+ case DP_LINK_ADDRESS:
+ return=20 drm_dp_sideband_parse_link_address(raw, msg);
+ case=20 DP_QUERY_PAYLOAD:
+ return=20 drm_dp_sideband_parse_query_payload_ack(raw, msg);
+ case=20 DP_REMOTE_DPCD_READ:
+ return=20 drm_dp_sideband_parse_remote_dpcd_read(raw, msg);
+ case=20 DP_REMOTE_DPCD_WRITE:
+ return=20 drm_dp_sideband_parse_remote_dpcd_write(raw, msg);
+ case=20 DP_REMOTE_I2C_READ:
+ return=20 drm_dp_sideband_parse_remote_i2c_read_ack(raw, msg);
+ case=20 DP_ENUM_PATH_RESOURCES:
+ return=20 drm_dp_sideband_parse_enum_path_resources_ack(raw, msg);
+ case=20 DP_ALLOCATE_PAYLOAD:
+ return=20 drm_dp_sideband_parse_allocate_payload_ack(raw, msg);
+ default:
+ DRM_ERROR("Got unknown reply 0x%02x\n", msg->req_type);
+ =09 return false;
+ }
+}
+
+static bool=20 drm_dp_sideband_parse_connection_status_notify(struct=20 drm_dp_sideband_msg_rx *raw,
+ struct=20 drm_dp_sideband_msg_req_body *msg)
+{
+ int idx =3D 1;
+
+=09 msg->u.conn_stat.port_number =3D (raw->msg[idx] & 0xf0) >>= ; 4;
+ idx++;
+ if (idx > raw->curlen)
+ goto fail_len;+
+ memcpy(msg->u.conn_stat.guid, &raw->msg[idx], 16);
+ idx=20 +=3D 16;
+ if (idx > raw->curlen)
+ goto fail_len;
+
+= =09 msg->u.conn_stat.legacy_device_plug_status =3D (raw->msg[idx]=20 >> 6) & 0x1;
+=09 msg->u.conn_stat.displayport_device_plug_status =3D (raw->msg[idx]=20 >> 5) & 0x1;
+=09 msg->u.conn_stat.message_capability_status =3D (raw->msg[idx]=20 >> 4) & 0x1;
+ msg->u.conn_stat.input_port =3D=20 (raw->msg[idx] >> 3) & 0x1;
+=09 msg->u.conn_stat.peer_device_type =3D (raw->msg[idx] & 0x7);+ idx++;
+ return true;
+fail_len:
+ DRM_DEBUG_KMS("connection=20 status reply parse length fail %d %d\n", idx, raw->curlen);
+=09 return false;
+}
+
+static bool=20 drm_dp_sideband_parse_resource_status_notify(struct=20 drm_dp_sideband_msg_rx *raw,
+ struct=20 drm_dp_sideband_msg_req_body *msg)
+{
+ int idx =3D 1;
+
+=09 msg->u.resource_stat.port_number =3D (raw->msg[idx] & 0xf0)=20 >> 4;
+ idx++;
+ if (idx > raw->curlen)
+ goto=20 fail_len;
+
+ memcpy(msg->u.resource_stat.guid,=20 &raw->msg[idx], 16);
+ idx +=3D 16;
+ if (idx >=20 raw->curlen)
+ goto fail_len;
+
+=09 msg->u.resource_stat.available_pbn =3D (raw->msg[idx] << 8) |= =20 (raw->msg[idx + 1]);
+ idx++;
+ return true;
+fail_len:
+ DRM_DEBUG_KMS("resource status reply parse length fail %d %d\n", idx,=20 raw->curlen);
+ return false;
+}
+
+static bool=20 drm_dp_sideband_parse_req(struct drm_dp_sideband_msg_rx *raw,
+ =20 struct drm_dp_sideband_msg_req_body *msg)
+{
+ memset(msg, 0,=20 sizeof(*msg));
+ msg->req_type =3D (raw->msg[0] & 0x7f);
= +
+ switch (msg->req_type) {
+ case DP_CONNECTION_STATUS_NOTIFY:
+ return drm_dp_sideband_parse_connection_status_notify(raw, msg);
+=09 case DP_RESOURCE_STATUS_NOTIFY:
+ return=20 drm_dp_sideband_parse_resource_status_notify(raw, msg);
+ default:
= + DRM_ERROR("Got unknown request 0x%02x\n", msg->req_type);
+ =09 return false;
+ }
+}
+
+static int build_dpcd_write(struct=20 drm_dp_sideband_msg_tx *msg, u8 port_num, u32 offset, u8 num_bytes, u8=20 *bytes)
+{
+ struct drm_dp_sideband_msg_req_body req;
+
+=09 req.req_type =3D DP_REMOTE_DPCD_WRITE;
+ req.u.dpcd_write.port_number = =3D port_num;
+ req.u.dpcd_write.dpcd_address =3D offset;
+=09 req.u.dpcd_write.num_bytes =3D num_bytes;
+=09 memcpy(req.u.dpcd_write.bytes, bytes, num_bytes);
+=09 drm_dp_encode_sideband_req(&req, msg);
+
+ return 0;
+}
+=
+static int build_link_address(struct drm_dp_sideband_msg_tx *msg)
+{
+=09 struct drm_dp_sideband_msg_req_body req;
+
+ req.req_type =3D=20 DP_LINK_ADDRESS;
+ drm_dp_encode_sideband_req(&req, msg);
+=09 return 0;
+}
+
+static int build_enum_path_resources(struct=20 drm_dp_sideband_msg_tx *msg, int port_num)
+{
+ struct=20 drm_dp_sideband_msg_req_body req;
+
+ req.req_type =3D=20 DP_ENUM_PATH_RESOURCES;
+ req.u.port_num.port_number =3D port_num;
= + drm_dp_encode_sideband_req(&req, msg);

+ msg->path_msg =3D= =20 true;
+ return 0;
+}
+
+static int=20 build_allocate_payload(struct drm_dp_sideband_msg_tx *msg, int port_num,<= br>+ u8 vcpi, uint16_t pbn)
+{
+ struct=20 drm_dp_sideband_msg_req_body req;
+ memset(&req, 0, sizeof(req));<= br>+ req.req_type =3D DP_ALLOCATE_PAYLOAD;
+=09 req.u.allocate_payload.port_number =3D port_num;
+=09 req.u.allocate_payload.vcpi =3D vcpi;
+ req.u.allocate_payload.pbn =3D= =20 pbn;
+ drm_dp_encode_sideband_req(&req, msg);
+=09 msg->path_msg =3D true;
+ return 0;
+}
+
+static int=20 drm_dp_mst_assign_payload_id(struct drm_dp_mst_topology_mgr *mgr,
+ =09 struct drm_dp_vcpi *vcpi)
+{
+ int ret;
+
+=09 mutex_lock(&mgr->payload_lock);
+ ret =3D=20 find_first_zero_bit(&mgr->payload_mask, mgr->max_payloads +=20 1);
+ if (ret > mgr->max_payloads) {
+ ret =3D -EINVAL;
+= =09 DRM_DEBUG_KMS("out of payload ids %d\n", ret);
+ goto out_unlock;+ }
+
+ set_bit(ret, &mgr->payload_mask);
+ vcpi->vcpi =3D ret;
+ mgr->proposed_vcpis[ret - 1] =3D vcpi;
+out_unlock:<= br>+ mutex_unlock(&mgr->payload_lock);
+ return ret;
+}
+
= +static void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr *mgr,
+ int id)
+{
+ if (id =3D=3D 0)
+ return;
+
+=09 mutex_lock(&mgr->payload_lock);
+ DRM_DEBUG_KMS("putting=20 payload %d\n", id);
+ clear_bit(id, &mgr->payload_mask);
+=09 mgr->proposed_vcpis[id - 1] =3D NULL;
+=09 mutex_unlock(&mgr->payload_lock);
+}
+
+static bool=20 check_txmsg_state(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_sideband_msg_tx *txmsg)
+{
+ bool ret;
+=09 mutex_lock(&mgr->qlock);
+ ret =3D (txmsg->state =3D=3D=20 DRM_DP_SIDEBAND_TX_RX ||
+ txmsg->state =3D=3D=20 DRM_DP_SIDEBAND_TX_TIMEOUT);
+ mutex_unlock(&mgr->qlock);
+ return ret;
+}
+
+static int drm_dp_mst_wait_tx_reply(struct=20 drm_dp_mst_branch *mstb,
+ struct drm_dp_sideband_msg_tx=20 *txmsg)
+{
+ struct drm_dp_mst_topology_mgr *mgr =3D mstb->mgr;<= br>+ int ret;
+
+ ret =3D wait_event_timeout(mgr->tx_waitq,
+ = =20 check_txmsg_state(mgr, txmsg),
+ (4 * HZ));
+=09 mutex_lock(&mstb->mgr->qlock);
+ if (ret > 0) {
+ if (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_TIMEOUT) {
+ ret =3D -EI= O;
+ goto out;
+ }
+ } else {
+ DRM_DEBUG_KMS("timedout msg=20 send %p %d %d\n", txmsg, txmsg->state, txmsg->seqno);
+
+ =09 /* dump some state */
+ ret =3D -EIO;
+
+ /* remove from q */<= br>+ if (txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_QUEUED ||
+ =20 txmsg->state =3D=3D DRM_DP_SIDEBAND_TX_START_SEND) {
+ =09 list_del(&txmsg->next);
+ }
+
+ if (txmsg->state =3D= =3D DRM_DP_SIDEBAND_TX_START_SEND ||
+ txmsg->state =3D=3D=20 DRM_DP_SIDEBAND_TX_SENT) {
+ mstb->tx_slots[txmsg->seqno] =3D=20 NULL;
+ }
+ }
+out:
+ mutex_unlock(&mgr->qlock);
+=
+ return ret;
+}
+
+static struct drm_dp_mst_branch=20 *drm_dp_add_mst_branch_device(u8 lct, u8 *rad)
+{
+ struct=20 drm_dp_mst_branch *mstb;
+
+ mstb =3D kzalloc(sizeof(*mstb),=20 GFP_KERNEL);
+ if (!mstb)
+ return NULL;
+
+ mstb->lct =3D lct;
+ if (lct > 1)
+ memcpy(mstb->rad, rad, lct / 2);
+ INIT_LIST_HEAD(&mstb->ports);
+=09 kref_init(&mstb->kref);
+ return mstb;
+}
+
+static=20 void drm_dp_destroy_mst_branch_device(struct kref *kref)
+{
+=09 struct drm_dp_mst_branch *mstb =3D container_of(kref, struct=20 drm_dp_mst_branch, kref);
+ struct drm_dp_mst_port *port, *tmp;
+=09 bool wake_tx =3D false;
+
+=09 cancel_work_sync(&mstb->mgr->work);
+
+ /*
+ *=20 destroy all ports - don't need lock
+ * as there are no more=20 references to the mst branch
+ * device at this point.
+ */
+ list_for_each_entry_safe(port, tmp, &mstb->ports, next) {
+ =09 list_del(&port->next);
+ drm_dp_put_port(port);
+ }
++ /* drop any tx slots msg */
+=09 mutex_lock(&mstb->mgr->qlock);
+ if (mstb->tx_slots[0]) {=
+ mstb->tx_slots[0]->state =3D DRM_DP_SIDEBAND_TX_TIMEOUT;
+ =09 mstb->tx_slots[0] =3D NULL;
+ wake_tx =3D true;
+ }
+ if=20 (mstb->tx_slots[1]) {
+ mstb->tx_slots[1]->state =3D=20 DRM_DP_SIDEBAND_TX_TIMEOUT;
+ mstb->tx_slots[1] =3D NULL;
+ =09 wake_tx =3D true;
+ }
+ mutex_unlock(&mstb->mgr->qlock);<= br>+
+ if (wake_tx)
+ wake_up(&mstb->mgr->tx_waitq);
+=09 kfree(mstb);
+}
+
+static void=20 drm_dp_put_mst_branch_device(struct drm_dp_mst_branch *mstb)
+{
+=09 kref_put(&mstb->kref, drm_dp_destroy_mst_branch_device);
+}
= +
+
+static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int=20 old_pdt)
+{
+ switch (old_pdt) {
+ case=20 DP_PEER_DEVICE_DP_LEGACY_CONV:
+ case DP_PEER_DEVICE_SST_SINK:
+ =09 /* remove i2c over sideband */
+ =09 drm_dp_mst_unregister_i2c_bus(&port->aux);
+ break;
+ case DP_PEER_DEVICE_MST_BRANCHING:
+ =09 drm_dp_put_mst_branch_device(port->mstb);
+ port->mstb =3D NULL= ;
+ break;
+ }
+}
+
+static void drm_dp_destroy_port(struct=20 kref *kref)
+{
+ struct drm_dp_mst_port *port =3D container_of(kref= , struct drm_dp_mst_port, kref);
+ struct drm_dp_mst_topology_mgr *mgr =3D port->mgr;
+ if (!port->input) {
+ =09 port->vcpi.num_slots =3D 0;
+ if (port->connector)
+ =09 (*port->mgr->cbs->destroy_connector)(mgr, port->connector);+ drm_dp_port_teardown_pdt(port, port->pdt);
+
+ if=20 (!port->input && port->vcpi.vcpi > 0)
+ =09 drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);
+ }
+=09 kfree(port);
+}
+
+static void drm_dp_put_port(struct=20 drm_dp_mst_port *port)
+{
+ kref_put(&port->kref,=20 drm_dp_destroy_port);
+}
+
+static struct drm_dp_mst_branch=20 *drm_dp_mst_get_validated_mstb_ref_locked(struct drm_dp_mst_branch=20 *mstb, struct drm_dp_mst_branch *to_find)
+{
+ struct=20 drm_dp_mst_port *port;
+ struct drm_dp_mst_branch *rmstb;
+ if=20 (to_find =3D=3D mstb) {
+ kref_get(&mstb->kref);
+ return=20 mstb;
+ }
+ list_for_each_entry(port, &mstb->ports, next) {<= br>+ if (port->mstb) {
+ rmstb =3D=20 drm_dp_mst_get_validated_mstb_ref_locked(port->mstb, to_find);
+ =09 if (rmstb)
+ return rmstb;
+ }
+ }
+ return NULL;
+}<= br>+
+static struct drm_dp_mst_branch *drm_dp_get_validated_mstb_ref(struct=20 drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_branch *mstb)
+{
+ struct drm_dp_mst_branch *rmstb =3D NULL;
+=09 mutex_lock(&mgr->lock);
+ if (mgr->mst_primary)
+ rmstb =3D drm_dp_mst_get_validated_mstb_ref_locked(mgr->mst_primary, mstb);=
+ mutex_unlock(&mgr->lock);
+ return rmstb;
+}
+
+stati= c struct drm_dp_mst_port *drm_dp_mst_get_port_ref_locked(struct=20 drm_dp_mst_branch *mstb, struct drm_dp_mst_port *to_find)
+{
+=09 struct drm_dp_mst_port *port, *mport;
+
+=09 list_for_each_entry(port, &mstb->ports, next) {
+ if (port =3D= =3D to_find) {
+ kref_get(&port->kref);
+ return port;
+ }
+ if (port->mstb) {
+ mport =3D=20 drm_dp_mst_get_port_ref_locked(port->mstb, to_find);
+ if=20 (mport)
+ return mport;
+ }
+ }
+ return NULL;
+}
+=
+static struct drm_dp_mst_port *drm_dp_get_validated_port_ref(struct=20 drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)
+{
+=09 struct drm_dp_mst_port *rport =3D NULL;
+=09 mutex_lock(&mgr->lock);
+ if (mgr->mst_primary)
+ rport =3D drm_dp_mst_get_port_ref_locked(mgr->mst_primary, port);
+=09 mutex_unlock(&mgr->lock);
+ return rport;
+}
+
+static struct drm_dp_mst_port *drm_dp_get_port(struct drm_dp_mst_branch *mstb, u8 port_num)
+{
+ struct drm_dp_mst_port *port;
+
+=09 list_for_each_entry(port, &mstb->ports, next) {
+ if=20 (port->port_num =3D=3D port_num) {
+ kref_get(&port->kref)= ;
+ return port;
+ }
+ }
+
+ return NULL;
+}
+
+/*+ * calculate a new RAD for this MST branch device
+ * if parent has=20 an LCT of 2 then it has 1 nibble of RAD,
+ * if parent has an LCT of 3 then it has 2 nibbles of RAD,
+ */
+static u8=20 drm_dp_calculate_rad(struct drm_dp_mst_port *port,
+ u8 *rad)
+= {
+ int lct =3D port->parent->lct;
+ int shift =3D 4;
+ int idx = =3D=20 lct / 2;
+ if (lct > 1) {
+ memcpy(rad,=20 port->parent->rad, idx);
+ shift =3D (lct % 2) ? 4 : 0;
+ }=20 else
+ rad[0] =3D 0;
+
+ rad[idx] |=3D port->port_num <&l= t;=20 shift;
+ return lct + 1;
+}
+
+/*
+ * return sends link=20 address for new mstb
+ */
+static bool=20 drm_dp_port_setup_pdt(struct drm_dp_mst_port *port)
+{
+ int ret;+ u8 rad[6], lct;
+ bool send_link =3D false;
+ switch (port->pdt= ) {
+ case DP_PEER_DEVICE_DP_LEGACY_CONV:
+ case=20 DP_PEER_DEVICE_SST_SINK:
+ /* add i2c over sideband */
+ ret =3D=20 drm_dp_mst_register_i2c_bus(&port->aux);
+ break;
+ case=20 DP_PEER_DEVICE_MST_BRANCHING:
+ lct =3D drm_dp_calculate_rad(port,=20 rad);
+
+ port->mstb =3D drm_dp_add_mst_branch_device(lct, rad)= ;
+ port->mstb->mgr =3D port->mgr;
+ =09 port->mstb->port_parent =3D port;
+
+ send_link =3D true;+=09 break;
+ }
+ return send_link;
+}
+
+static void=20 drm_dp_check_port_guid(struct drm_dp_mst_branch *mstb,
+ struct drm_dp_mst_port *port)
+{
+ int ret;
+ if (port->dpcd_rev=20 >=3D 0x12) {
+ port->guid_valid =3D=20 drm_dp_validate_guid(mstb->mgr, port->guid);
+ if=20 (!port->guid_valid) {
+ ret =3D=20 drm_dp_send_dpcd_write(mstb->mgr,
+ port,
+ =20 DP_GUID,
+ 16, port->guid);
+ port->guid_valid =3D true;
+ }
+ }
+}
+
+static void=20 build_mst_prop_path(struct drm_dp_mst_port *port,
+ struct=20 drm_dp_mst_branch *mstb,
+ char *proppath)
+{
+ int i;
+=09 char temp[8];
+ snprintf(proppath, 255, "mst:%d",=20 mstb->mgr->conn_base_id);
+ for (i =3D 0; i < (mstb->lct -= =20 1); i++) {
+ int shift =3D (i % 2) ? 0 : 4;
+ int port_num =3D=20 mstb->rad[i / 2] >> shift;
+ snprintf(temp, 8, "-%d",=20 port_num);
+ strncat(proppath, temp, 255);
+ }
+=09 snprintf(temp, 8, "-%d", port->port_num);
+ strncat(proppath,=20 temp, 255);
+}
+
+static void drm_dp_add_port(struct=20 drm_dp_mst_branch *mstb,
+ struct device *dev,
+ =20 struct drm_dp_link_addr_reply_port *port_msg)
+{
+ struct=20 drm_dp_mst_port *port;
+ bool ret;
+ bool created =3D false;
+=09 int old_pdt =3D 0;
+ int old_ddps =3D 0;
+ port =3D=20 drm_dp_get_port(mstb, port_msg->port_number);
+ if (!port) {
+=09 port =3D kzalloc(sizeof(*port), GFP_KERNEL);
+ if (!port)
+ =09 return;
+ kref_init(&port->kref);
+ port->parent =3D=20 mstb;
+ port->port_num =3D port_msg->port_number;
+ =09 port->mgr =3D mstb->mgr;
+ port->aux.name =3D "DPMST";
+ = =09 port->aux.dev =3D dev;
+ created =3D true;
+ } else {
+ =09 old_pdt =3D port->pdt;
+ old_ddps =3D port->ddps;
+ }
++ port->pdt =3D port_msg->peer_device_type;
+ port->input =3D=20 port_msg->input_port;
+ port->mcs =3D port_msg->mcs;
+=09 port->ddps =3D port_msg->ddps;
+ port->ldps =3D=20 port_msg->legacy_device_plug_status;
+ port->dpcd_rev =3D=20 port_msg->dpcd_revision;
+
+ memcpy(port->guid,=20 port_msg->peer_guid, 16);
+
+ /* manage mstb port lists with=20 mgr lock - take a reference
+ for this list */
+ if (created) {<= br>+ mutex_lock(&mstb->mgr->lock);
+ =09 kref_get(&port->kref);
+ list_add(&port->next,=20 &mstb->ports);
+ mutex_unlock(&mstb->mgr->lock);
= + }
+
+ if (old_ddps !=3D port->ddps) {
+ if (port->ddps) = {
+ drm_dp_check_port_guid(mstb, port);
+ if (!port->input)
+=09 drm_dp_send_enum_path_resources(mstb->mgr, mstb, port);
+ }=20 else {
+ port->guid_valid =3D false;
+ port->available_pb= n =3D 0;
+ }
+ }
+
+ if (old_pdt !=3D port->pdt &&= ;=20 !port->input) {
+ drm_dp_port_teardown_pdt(port, old_pdt);
++ ret =3D drm_dp_port_setup_pdt(port);
+ if (ret =3D=3D true) {
+ = =09 drm_dp_send_link_address(mstb->mgr, port->mstb);
+ =09 port->mstb->link_address_sent =3D true;
+ }
+ }
+
+ if= =20 (created && !port->input) {
+ char proppath[255];
+ =09 build_mst_prop_path(port, mstb, proppath);
+ port->connector =3D=20 (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);<= br>+ }
+
+ /* put reference to this port */
+=09 drm_dp_put_port(port);
+}
+
+static void=20 drm_dp_update_port(struct drm_dp_mst_branch *mstb,
+ struct=20 drm_dp_connection_status_notify *conn_stat)
+{
+ struct=20 drm_dp_mst_port *port;
+ int old_pdt;
+ int old_ddps;
+ bool=20 dowork =3D false;
+ port =3D drm_dp_get_port(mstb,=20 conn_stat->port_number);
+ if (!port)
+ return;
+
+=09 old_ddps =3D port->ddps;
+ old_pdt =3D port->pdt;
+ port->= pdt =3D conn_stat->peer_device_type;
+ port->mcs =3D=20 conn_stat->message_capability_status;
+ port->ldps =3D=20 conn_stat->legacy_device_plug_status;
+ port->ddps =3D=20 conn_stat->displayport_device_plug_status;
+
+ if (old_ddps !=3D= =20 port->ddps) {
+ if (port->ddps) {
+ =09 drm_dp_check_port_guid(mstb, port);
+ dowork =3D true;
+ } else = {
+ port->guid_valid =3D false;
+ port->available_pbn =3D 0;+=09 }
+ }
+ if (old_pdt !=3D port->pdt && !port->input) = {
+ drm_dp_port_teardown_pdt(port, old_pdt);
+
+ if=20 (drm_dp_port_setup_pdt(port))
+ dowork =3D true;
+ }
+
+=09 drm_dp_put_port(port);
+ if (dowork)
+ queue_work(system_long_wq, &mstb->mgr->work);
+
+}
+
+static struct=20 drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct=20 drm_dp_mst_topology_mgr *mgr,
+ u8 lct, u8 *rad)
+{+ struct drm_dp_mst_branch *mstb;
+ struct drm_dp_mst_port *port;
+ int i;
+ /* find the port by iterating down */
+ mstb =3D=20 mgr->mst_primary;
+
+ for (i =3D 0; i < lct - 1; i++) {
+ = =09 int shift =3D (i % 2) ? 0 : 4;
+ int port_num =3D rad[i / 2] >>= =20 shift;
+
+ list_for_each_entry(port, &mstb->ports, next) {<= br>+ if (port->port_num =3D=3D port_num) {
+ if (!port->mstb) {=
+ DRM_ERROR("failed to lookup MSTB with lct %d, rad %02x\n", lct,=20 rad[0]);
+ return NULL;
+ }
+
+ mstb =3D=20 port->mstb;
+ break;
+ }
+ }
+ }
+=09 kref_get(&mstb->kref);
+ return mstb;
+}
+
+static=20 void drm_dp_check_and_send_link_address(struct drm_dp_mst_topology_mgr=20 *mgr,
+ struct drm_dp_mst_branch *mstb)
+{
+ struct=20 drm_dp_mst_port *port;
+
+ if (!mstb->link_address_sent) {
+ drm_dp_send_link_address(mgr, mstb);
+ mstb->link_address_sent =3D true;
+ }
+ list_for_each_entry(port, &mstb->ports, next) {=
+ if (port->input)
+ continue;
+
+ if (!port->ddps)
= + continue;
+
+ if (!port->available_pbn)
+ =09 drm_dp_send_enum_path_resources(mgr, mstb, port);
+
+ if=20 (port->mstb)
+ drm_dp_check_and_send_link_address(mgr,=20 port->mstb);
+ }
+}
+
+static void=20 drm_dp_mst_link_probe_work(struct work_struct *work)
+{
+ struct=20 drm_dp_mst_topology_mgr *mgr =3D container_of(work, struct=20 drm_dp_mst_topology_mgr, work);
+
+=09 drm_dp_check_and_send_link_address(mgr, mgr->mst_primary);
+
+}<= br>+
+static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
+ =09 u8 *guid)
+{
+ static u8 zero_guid[16];
+
+ if=20 (!memcmp(guid, zero_guid, 16)) {
+ u64 salt =3D get_jiffies_64();
= + memcpy(&guid[0], &salt, sizeof(u64));
+ =09 memcpy(&guid[8], &salt, sizeof(u64));
+ return false;
+ }<= br>+ return true;
+}
+
+#if 0
+static int build_dpcd_read(struct drm_dp_sideband_msg_tx *msg, u8 port_num, u32 offset, u8 num_bytes)
+= {
+ struct drm_dp_sideband_msg_req_body req;
+
+ req.req_type =3D=20 DP_REMOTE_DPCD_READ;
+ req.u.dpcd_read.port_number =3D port_num;
+=09 req.u.dpcd_read.dpcd_address =3D offset;
+ req.u.dpcd_read.num_bytes =3D= =20 num_bytes;
+ drm_dp_encode_sideband_req(&req, msg);
+
+=09 return 0;
+}
+#endif
+
+static int=20 drm_dp_send_sideband_msg(struct drm_dp_mst_topology_mgr *mgr,
+ =20 bool up, u8 *msg, int len)
+{
+ int ret;
+ int regbase =3D up = ? DP_SIDEBAND_MSG_UP_REP_BASE : DP_SIDEBAND_MSG_DOWN_REQ_BASE;
+ int=20 tosend, total, offset;
+ int retries =3D 0;
+
+retry:
+ total= =3D len;
+ offset =3D 0;
+ do {
+ tosend =3D=20 min3(mgr->max_dpcd_transaction_bytes, 16, total);
+
+ =09 mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_write(mgr->aux, regbase + offset,
+ =09 &msg[offset],
+ tosend);
+ =09 mutex_unlock(&mgr->aux_lock);
+ if (ret !=3D tosend) {
+ =09 if (ret =3D=3D -EIO && retries < 5) {
+ retries++;
+ = =09 goto retry;
+ }
+ DRM_DEBUG_KMS("failed to dpcd write %d=20 %d\n", tosend, ret);
+ WARN(1, "fail\n");
+
+ return -EIO;+ }
+ offset +=3D tosend;
+ total -=3D tosend;
+ } while (tota= l=20 > 0);
+ return 0;
+}
+
+static int=20 set_hdr_from_dst_qlock(struct drm_dp_sideband_msg_hdr *hdr,
+ =20 struct drm_dp_sideband_msg_tx *txmsg)
+{
+ struct=20 drm_dp_mst_branch *mstb =3D txmsg->dst;
+
+ /* both msg slots ar= e full */
+ if (txmsg->seqno =3D=3D -1) {
+ if=20 (mstb->tx_slots[0] && mstb->tx_slots[1]) {
+ =09 DRM_DEBUG_KMS("%s: failed to find slot\n", __func__);
+ return=20 -EAGAIN;
+ }
+ if (mstb->tx_slots[0] =3D=3D NULL &&=20 mstb->tx_slots[1] =3D=3D NULL) {
+ txmsg->seqno =3D=20 mstb->last_seqno;
+ mstb->last_seqno ^=3D 1;
+ } else if=20 (mstb->tx_slots[0] =3D=3D NULL)
+ txmsg->seqno =3D 0;
+ el= se
+ txmsg->seqno =3D 1;
+ mstb->tx_slots[txmsg->seqno] =3D=20 txmsg;
+ }
+ hdr->broadcast =3D 0;
+ hdr->path_msg =3D=20 txmsg->path_msg;
+ hdr->lct =3D mstb->lct;
+ hdr->lcr =3D= =20 mstb->lct - 1;
+ if (mstb->lct > 1)
+ =09 memcpy(hdr->rad, mstb->rad, mstb->lct / 2);
+ hdr->seqno =3D txmsg->seqno;
+ return 0;
+}
+/*
+ * process a single=20 block of the next message in the sideband queue
+ */
+static int=20 process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_sideband_msg_tx *txmsg,
+ bool up)
+{
+ u8=20 chunk[48];
+ struct drm_dp_sideband_msg_hdr hdr;
+ int len, space, idx, tosend;
+ int ret;
+
+ if (txmsg->state =3D=3D=20 DRM_DP_SIDEBAND_TX_QUEUED) {
+ txmsg->seqno =3D -1;
+ =09 txmsg->state =3D DRM_DP_SIDEBAND_TX_START_SEND;
+ }
+
+ /*=20 make hdr from dst mst - for replies use seqno
+ otherwise assign=20 one */
+ ret =3D set_hdr_from_dst_qlock(&hdr, txmsg);
+ if (ret= =20 < 0)
+ return ret;
+
+ /* amount left to send in this=20 message */
+ len =3D txmsg->cur_len - txmsg->cur_offset;
++ /* 48 - sideband msg size - 1 byte for data CRC, x header bytes */
+ space =3D 48 - 1 - drm_dp_calc_sb_hdr_size(&hdr);
+
+ tosend =3D= =20 min(len, space);
+ if (len =3D=3D txmsg->cur_len)
+ hdr.somt =3D= 1;
+ if (space >=3D len)
+ hdr.eomt =3D 1;
+
+
+ hdr.msg_len = =3D=20 tosend + 1;
+ drm_dp_encode_sideband_msg_hdr(&hdr, chunk,=20 &idx);
+ memcpy(&chunk[idx],=20 &txmsg->msg[txmsg->cur_offset], tosend);
+ /* add crc at=20 end */
+ drm_dp_crc_sideband_chunk_req(&chunk[idx], tosend);
+ idx +=3D tosend + 1;
+
+ ret =3D drm_dp_send_sideband_msg(mgr, up,= =20 chunk, idx);
+ if (ret) {
+ DRM_DEBUG_KMS("sideband msg failed to send\n");
+ return ret;
+ }
+
+ txmsg->cur_offset +=3D=20 tosend;
+ if (txmsg->cur_offset =3D=3D txmsg->cur_len) {
+ =09 txmsg->state =3D DRM_DP_SIDEBAND_TX_SENT;
+ return 1;
+ }
+=09 return 0;
+}
+
+/* must be called holding qlock */
+static=20 void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)+{
+ struct drm_dp_sideband_msg_tx *txmsg;
+ int ret;
+
+ /*=20 construct a chunk from the first msg in the tx_msg queue */
+ if=20 (list_empty(&mgr->tx_msg_downq)) {
+ =09 mgr->tx_down_in_progress =3D false;
+ return;
+ }
+=09 mgr->tx_down_in_progress =3D true;
+
+ txmsg =3D=20 list_first_entry(&mgr->tx_msg_downq, struct=20 drm_dp_sideband_msg_tx, next);
+ ret =3D process_single_tx_qlock(mgr,=20 txmsg, false);
+ if (ret =3D=3D 1) {
+ /* txmsg is sent it should = be=20 in the slots now */
+ list_del(&txmsg->next);
+ } else if=20 (ret) {
+ DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
+ =09 list_del(&txmsg->next);
+ if (txmsg->seqno !=3D -1)
+ =09 txmsg->dst->tx_slots[txmsg->seqno] =3D NULL;
+ =09 txmsg->state =3D DRM_DP_SIDEBAND_TX_TIMEOUT;
+ =09 wake_up(&mgr->tx_waitq);
+ }
+ if=20 (list_empty(&mgr->tx_msg_downq)) {
+ =09 mgr->tx_down_in_progress =3D false;
+ return;
+ }
+}
++/* called holding qlock */
+static void=20 process_single_up_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
+{
= + struct drm_dp_sideband_msg_tx *txmsg;
+ int ret;
+
+ /*=20 construct a chunk from the first msg in the tx_msg queue */
+ if=20 (list_empty(&mgr->tx_msg_upq)) {
+ mgr->tx_up_in_progress =3D false;
+ return;
+ }
+
+ txmsg =3D=20 list_first_entry(&mgr->tx_msg_upq, struct drm_dp_sideband_msg_tx, next);
+ ret =3D process_single_tx_qlock(mgr, txmsg, true);
+ if=20 (ret =3D=3D 1) {
+ /* up txmsgs aren't put in slots - so free after w= e=20 send it */
+ list_del(&txmsg->next);
+ kfree(txmsg);
+ } else if (ret)
+ DRM_DEBUG_KMS("failed to send msg in q %d\n",=20 ret);
+ mgr->tx_up_in_progress =3D true;
+}
+
+static void= =20 drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_sideband_msg_tx *txmsg)
+{
+=09 mutex_lock(&mgr->qlock);
+ list_add_tail(&txmsg->next,=20 &mgr->tx_msg_downq);
+ if (!mgr->tx_down_in_progress)
+=09 process_single_down_tx_qlock(mgr);
+=09 mutex_unlock(&mgr->qlock);
+}
+
+static int=20 drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_mst_branch *mstb)
+{
+ int len;
+ struct=20 drm_dp_sideband_msg_tx *txmsg;
+ int ret;
+
+ txmsg =3D=20 kzalloc(sizeof(*txmsg), GFP_KERNEL);
+ if (!txmsg)
+ return=20 -ENOMEM;
+
+ txmsg->dst =3D mstb;
+ len =3D=20 build_link_address(txmsg);
+
+ drm_dp_queue_down_tx(mgr, txmsg);+
+ ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg);
+ if (ret > 0) {+ int i;
+
+ if (txmsg->reply.reply_type =3D=3D 1)
+ =09 DRM_DEBUG_KMS("link address nak received\n");
+ else {
+ =09 DRM_DEBUG_KMS("link address reply: %d\n",=20 txmsg->reply.u.link_addr.nports);
+ for (i =3D 0; i <=20 txmsg->reply.u.link_addr.nports; i++) {
+ DRM_DEBUG_KMS("port=20 %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps=20 %d\n", i,
+ =20 txmsg->reply.u.link_addr.ports[i].input_port,
+ =20 txmsg->reply.u.link_addr.ports[i].peer_device_type,
+ =20 txmsg->reply.u.link_addr.ports[i].port_number,
+ =20 txmsg->reply.u.link_addr.ports[i].dpcd_revision,
+ =20 txmsg->reply.u.link_addr.ports[i].mcs,
+ =20 txmsg->reply.u.link_addr.ports[i].ddps,
+ =20 txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status);
+ }=
+ for (i =3D 0; i < txmsg->reply.u.link_addr.nports; i++) {
+ = =09 drm_dp_add_port(mstb, mgr->dev,=20 &txmsg->reply.u.link_addr.ports[i]);
+ }
+ =09 (*mgr->cbs->hotplug)(mgr);
+ }
+ } else
+ =09 DRM_DEBUG_KMS("link address failed %d\n", ret);
+
+ kfree(txmsg);+ return 0;
+}
+
+static int=20 drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_branch *mstb,
+ struct=20 drm_dp_mst_port *port)
+{
+ int len;
+ struct=20 drm_dp_sideband_msg_tx *txmsg;
+ int ret;
+
+ txmsg =3D=20 kzalloc(sizeof(*txmsg), GFP_KERNEL);
+ if (!txmsg)
+ return=20 -ENOMEM;
+
+ txmsg->dst =3D mstb;
+ len =3D=20 build_enum_path_resources(txmsg, port->port_num);
+
+=09 drm_dp_queue_down_tx(mgr, txmsg);
+
+ ret =3D=20 drm_dp_mst_wait_tx_reply(mstb, txmsg);
+ if (ret > 0) {
+ if=20 (txmsg->reply.reply_type =3D=3D 1)
+ DRM_DEBUG_KMS("enum path=20 resources nak received\n");
+ else {
+ if (port->port_num !=3D txmsg->reply.u.path_resources.port_number)
+ DRM_ERROR("got=20 incorrect port in response\n");
+ DRM_DEBUG_KMS("enum path=20 resources %d: %d %d\n", txmsg->reply.u.path_resources.port_number,=20 txmsg->reply.u.path_resources.full_payload_bw_number,
+ =20 txmsg->reply.u.path_resources.avail_payload_bw_number);
+ =09 port->available_pbn =3D=20 txmsg->reply.u.path_resources.avail_payload_bw_number;
+ }
+ }<= br>+
+ kfree(txmsg);
+ return 0;
+}
+
+int=20 drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_mst_port *port,
+ int id,
+ int pbn)
+= {
+ struct drm_dp_sideband_msg_tx *txmsg;
+ struct drm_dp_mst_branch=20 *mstb;
+ int len, ret;
+
+ mstb =3D=20 drm_dp_get_validated_mstb_ref(mgr, port->parent);
+ if (!mstb)
+ return -EINVAL;
+
+ txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL)= ;
+ if (!txmsg) {
+ ret =3D -ENOMEM;
+ goto fail_put;
+ }
++ txmsg->dst =3D mstb;
+ len =3D build_allocate_payload(txmsg,=20 port->port_num,
+ id,
+ pbn);
+
+=09 drm_dp_queue_down_tx(mgr, txmsg);
+
+ ret =3D=20 drm_dp_mst_wait_tx_reply(mstb, txmsg);
+ if (ret > 0) {
+ if=20 (txmsg->reply.reply_type =3D=3D 1) {
+ ret =3D -EINVAL;
+ } e= lse
+ ret =3D 0;
+ }
+ kfree(txmsg);
+fail_put:
+=09 drm_dp_put_mst_branch_device(mstb);
+ return ret;
+}
+
+stati= c int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
= + int id,
+ struct drm_dp_payload *payload)
+{+ int ret;
+
+ ret =3D drm_dp_dpcd_write_payload(mgr, id, payload);<= br>+ if (ret < 0) {
+ payload->payload_state =3D 0;
+ return=20 ret;
+ }
+ payload->payload_state =3D DP_PAYLOAD_LOCAL;
+=09 return 0;
+}
+
+int drm_dp_create_payload_step2(struct=20 drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_port *port,
+ int id,
+ struct drm_dp_payload *payload)
+{
+ int ret;+ ret =3D drm_dp_payload_send_msg(mgr, port, id, port->vcpi.pbn);
+=09 if (ret < 0)
+ return ret;
+ payload->payload_state =3D=20 DP_PAYLOAD_REMOTE;
+ return ret;
+}
+
+int=20 drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
+ =09 struct drm_dp_mst_port *port,
+ int id,
+ struct=20 drm_dp_payload *payload)
+{
+ DRM_DEBUG_KMS("\n");
+ /* its=20 okay for these to fail */
+ if (port) {
+ =09 drm_dp_payload_send_msg(mgr, port, id, 0);
+ }
+
+=09 drm_dp_dpcd_write_payload(mgr, id, payload);
+=09 payload->payload_state =3D 0;
+ return 0;
+}
+
+int=20 drm_dp_destroy_payload_step2(struct drm_dp_mst_topology_mgr *mgr,
+ =09 int id,
+ struct drm_dp_payload *payload)
+{
+=09 payload->payload_state =3D 0;
+ return 0;
+}
+
+/**
+ *= =20 drm_dp_update_payload_part1() - Execute payload update part 1
+ *=20 @mgr: manager to use.
+ *
+ * This iterates over all proposed=20 virtual channels, and tries to
+ * allocate space in the link for=20 them. For 0->slots transitions,
+ * this step just writes the VCPI to the MST device. For slots->0
+ * transitions, this writes the=20 updated VCPIs and removes the
+ * remote VC payloads.
+ *
+ *=20 after calling this the driver should generate ACT and payload
+ *=20 packets.
+ */
+int drm_dp_update_payload_part1(struct=20 drm_dp_mst_topology_mgr *mgr)
+{
+ int i;
+ int cur_slots =3D 1;=
+ struct drm_dp_payload req_payload;
+ struct drm_dp_mst_port *port;+
+ mutex_lock(&mgr->payload_lock);
+ for (i =3D 0; i <=20 mgr->max_payloads; i++) {
+ /* solve the current payloads -=20 compare to the hw ones
+ - update the hw view */
+ =09 req_payload.start_slot =3D cur_slots;
+ if (mgr->proposed_vcpis[i]= ) {
+ port =3D container_of(mgr->proposed_vcpis[i], struct=20 drm_dp_mst_port, vcpi);
+ req_payload.num_slots =3D=20 mgr->proposed_vcpis[i]->num_slots;
+ } else {
+ port =3D=20 NULL;
+ req_payload.num_slots =3D 0;
+ }
+ /* work out what=20 is required to happen with this payload */
+ if=20 (mgr->payloads[i].start_slot !=3D req_payload.start_slot ||
+ =20 mgr->payloads[i].num_slots !=3D req_payload.num_slots) {
+
+ /= * need to push an update for this payload */
+ if=20 (req_payload.num_slots) {
+ drm_dp_create_payload_step1(mgr, i +=20 1, &req_payload);
+ mgr->payloads[i].num_slots =3D=20 req_payload.num_slots;
+ } else if (mgr->payloads[i].num_slots) {=
+ mgr->payloads[i].num_slots =3D 0;
+ =09 drm_dp_destroy_payload_step1(mgr, port, i + 1,=20 &mgr->payloads[i]);
+ req_payload.payload_state =3D=20 mgr->payloads[i].payload_state;
+ }
+ =09 mgr->payloads[i].start_slot =3D req_payload.start_slot;
+ =09 mgr->payloads[i].payload_state =3D req_payload.payload_state;
+ }<= br>+ cur_slots +=3D req_payload.num_slots;
+ }
+=09 mutex_unlock(&mgr->payload_lock);
+
+ return 0;
+}
+EX= PORT_SYMBOL(drm_dp_update_payload_part1);
+
+/**
+ * drm_dp_update_payload_part2() - Execute payload update part 2
+ *=20 @mgr: manager to use.
+ *
+ * This iterates over all proposed=20 virtual channels, and tries to
+ * allocate space in the link for=20 them. For 0->slots transitions,
+ * this step writes the remote VC payload commands. For slots->0
+ * this just resets some internal state.
+ */
+int drm_dp_update_payload_part2(struct=20 drm_dp_mst_topology_mgr *mgr)
+{
+ struct drm_dp_mst_port *port;+ int i;
+ int ret;
+ mutex_lock(&mgr->payload_lock);
+=09 for (i =3D 0; i < mgr->max_payloads; i++) {
+
+ if=20 (!mgr->proposed_vcpis[i])
+ continue;
+
+ port =3D=20 container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);+
+ DRM_DEBUG_KMS("payload %d %d\n", i,=20 mgr->payloads[i].payload_state);
+ if=20 (mgr->payloads[i].payload_state =3D=3D DP_PAYLOAD_LOCAL) {
+ ret = =3D=20 drm_dp_create_payload_step2(mgr, port, i + 1, &mgr->payloads[i]);<= br>+ } else if (mgr->payloads[i].payload_state =3D=3D=20 DP_PAYLOAD_DELETE_LOCAL) {
+ ret =3D=20 drm_dp_destroy_payload_step2(mgr, i + 1, &mgr->payloads[i]);
+ }
+ if (ret) {
+ mutex_unlock(&mgr->payload_lock);
+ return ret;
+ }
+ }
+=09 mutex_unlock(&mgr->payload_lock);
+ return 0;
+}
+EXPORT_= SYMBOL(drm_dp_update_payload_part2);
+
+#if 0 /* unused as of yet */
+static int drm_dp_send_dpcd_read(struct=20 drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_port *port,
+ int offset, int size)
+{
+ int len;
+ struct=20 drm_dp_sideband_msg_tx *txmsg;
+
+ txmsg =3D kzalloc(sizeof(*txmsg)= , GFP_KERNEL);
+ if (!txmsg)
+ return -ENOMEM;
+
+ len =3D=20 build_dpcd_read(txmsg, port->port_num, 0, 8);
+ txmsg->dst =3D=20 port->parent;
+
+ drm_dp_queue_down_tx(mgr, txmsg);
+
+=09 return 0;
+}
+#endif
+
+static int=20 drm_dp_send_dpcd_write(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_mst_port *port,
+ int offset, int size, u8 *bytes)<= br>+{
+ int len;
+ int ret;
+ struct drm_dp_sideband_msg_tx *txmsg;
+=09 struct drm_dp_mst_branch *mstb;
+
+ mstb =3D=20 drm_dp_get_validated_mstb_ref(mgr, port->parent);
+ if (!mstb)
+ return -EINVAL;
+
+ txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL)= ;
+ if (!txmsg) {
+ ret =3D -ENOMEM;
+ goto fail_put;
+ }
++ len =3D build_dpcd_write(txmsg, port->port_num, offset, size, bytes);=
+ txmsg->dst =3D mstb;
+
+ drm_dp_queue_down_tx(mgr, txmsg);
+=
+ ret =3D drm_dp_mst_wait_tx_reply(mstb, txmsg);
+ if (ret > 0) {+ if (txmsg->reply.reply_type =3D=3D 1) {
+ ret =3D -EINVAL;
+= }=20 else
+ ret =3D 0;
+ }
+ kfree(txmsg);
+fail_put:
+=09 drm_dp_put_mst_branch_device(mstb);
+ return ret;
+}
+
+stati= c int drm_dp_encode_up_ack_reply(struct drm_dp_sideband_msg_tx *msg, u8=20 req_type)
+{
+ struct drm_dp_sideband_msg_reply_body reply;
++ reply.reply_type =3D 1;
+ reply.req_type =3D req_type;
+=09 drm_dp_encode_sideband_reply(&reply, msg);
+ return 0;
+}
+<= br>+static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
+=09 struct drm_dp_mst_branch *mstb,
+ int req_type, int=20 seqno, bool broadcast)
+{
+ struct drm_dp_sideband_msg_tx *txmsg;+
+ txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL);
+ if (!txmsg)
+ =09 return -ENOMEM;
+
+ txmsg->dst =3D mstb;
+ txmsg->seqno =3D= =20 seqno;
+ drm_dp_encode_up_ack_reply(txmsg, req_type);
+
+=09 mutex_lock(&mgr->qlock);
+ list_add_tail(&txmsg->next,=20 &mgr->tx_msg_upq);
+ if (!mgr->tx_up_in_progress) {
+ =09 process_single_up_tx_qlock(mgr);
+ }
+=09 mutex_unlock(&mgr->qlock);
+ return 0;
+}
+
+static=20 int drm_dp_get_vc_payload_bw(int dp_link_bw, int dp_link_count)
+{
= + switch (dp_link_bw) {
+ case DP_LINK_BW_1_62:
+ return 3 *=20 dp_link_count;
+ case DP_LINK_BW_2_7:
+ return 5 * dp_link_count;<= br>+ case DP_LINK_BW_5_4:
+ return 10 * dp_link_count;
+ }
+=09 return 0;
+}
+
+/**
+ * drm_dp_mst_topology_mgr_set_mst() -=20 Set the MST state for a topology manager
+ * @mgr: manager to set=20 state for
+ * @mst_state: true to enable MST on this connector -=20 false to disable.
+ *
+ * This is called by the driver when it=20 detects an MST capable device plugged
+ * into a DP MST capable port, or when a DP MST capable device is unplugged.
+ */
+int=20 drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr,=20 bool mst_state)
+{
+ int ret =3D 0;
+ struct drm_dp_mst_branch=20 *mstb =3D NULL;
+
+ mutex_lock(&mgr->lock);
+ if=20 (mst_state =3D=3D mgr->mst_state)
+ goto out_unlock;
+
+=09 mgr->mst_state =3D mst_state;
+ /* set the device into MST mode */<= br>+ if (mst_state) {
+ WARN_ON(mgr->mst_primary);
+
+ /* get=20 dpcd info */
+ mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd,=20 DP_RECEIVER_CAP_SIZE);
+ mutex_unlock(&mgr->aux_lock);
+ =09 if (ret !=3D DP_RECEIVER_CAP_SIZE) {
+ DRM_DEBUG_KMS("failed to read= =20 DPCD\n");
+ goto out_unlock;
+ }
+
+ mgr->pbn_div =3D=20 drm_dp_get_vc_payload_bw(mgr->dpcd[1], mgr->dpcd[2] &=20 DP_MAX_LANE_COUNT_MASK);
+ mgr->total_pbn =3D 2560;
+ =09 mgr->total_slots =3D DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);=
+ mgr->avail_slots =3D mgr->total_slots;
+
+ /* add initial=20 branch device at LCT 1 */
+ mstb =3D drm_dp_add_mst_branch_device(1,=20 NULL);
+ if (mstb =3D=3D NULL) {
+ ret =3D -ENOMEM;
+ goto=20 out_unlock;
+ }
+ mstb->mgr =3D mgr;
+
+ /* give this=20 the main reference */
+ mgr->mst_primary =3D mstb;
+ =09 kref_get(&mgr->mst_primary->kref);
+
+ {
+ struct=20 drm_dp_payload reset_pay;
+ reset_pay.start_slot =3D 0;
+ =09 reset_pay.num_slots =3D 0x3f;
+ drm_dp_dpcd_write_payload(mgr, 0,=20 &reset_pay);
+ }
+
+ mutex_lock(&mgr->aux_lock);+ ret =3D drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
+ =20 DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC);
+ =09 mutex_unlock(&mgr->aux_lock);
+ if (ret < 0) {
+ goto out_unlock;
+ }
+
+
+ /* sort out guid */
+ =09 mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_read(mgr->aux, DP_GUID, mgr->guid, 16);
+ =09 mutex_unlock(&mgr->aux_lock);
+ if (ret !=3D 16) {
+ =09 DRM_DEBUG_KMS("failed to read DP GUID %d\n", ret);
+ goto=20 out_unlock;
+ }
+
+ mgr->guid_valid =3D=20 drm_dp_validate_guid(mgr, mgr->guid);
+ if (!mgr->guid_valid) {=
+ ret =3D drm_dp_dpcd_write(mgr->aux, DP_GUID, mgr->guid, 16);
= + mgr->guid_valid =3D true;
+ }
+
+ =09 queue_work(system_long_wq, &mgr->work);
+
+ ret =3D 0;
+= } else {
+ /* disable MST on the device */
+ mstb =3D=20 mgr->mst_primary;
+ mgr->mst_primary =3D NULL;
+ /* this ca= n fail if the device is gone */
+ drm_dp_dpcd_writeb(mgr->aux,=20 DP_MSTM_CTRL, 0);
+ ret =3D 0;
+ memset(mgr->payloads, 0,=20 mgr->max_payloads * sizeof(struct drm_dp_payload));
+ =09 mgr->payload_mask =3D 0;
+ set_bit(0, &mgr->payload_mask);<= br>+ }
+
+out_unlock:
+ mutex_unlock(&mgr->lock);
+ if=20 (mstb)
+ drm_dp_put_mst_branch_device(mstb);
+ return ret;
++}
+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_set_mst);
+
+/**
+ * drm_dp_mst_topology_mgr_suspend() - suspend the MST manager
+ *=20 @mgr: manager to suspend
+ *
+ * This function tells the MST=20 device that we can't handle UP messages
+ * anymore. This should stop it from sending any since we are suspended.
+ */
+void=20 drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr)
+= {
+ mutex_lock(&mgr->lock);
+ mutex_lock(&mgr->aux_lock);+ drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
+ DP_MST_EN |=20 DP_UPSTREAM_IS_SRC);
+ mutex_unlock(&mgr->aux_lock);
+=09 mutex_unlock(&mgr->lock);
+}
+EXPORT_SYMBOL(drm_dp_mst_topol= ogy_mgr_suspend);
+
+/**
+ * drm_dp_mst_topology_mgr_resume() - resume the MST manager
+ *=20 @mgr: manager to resume
+ *
+ * This will fetch DPCD and see if=20 the device is still there,
+ * if it is, it will rewrite the MSTM=20 control bits, and return.
+ *
+ * if the device fails this returns -1, and the driver should do
+ * a full MST reprobe, in case we were undocked.
+ */
+int drm_dp_mst_topology_mgr_resume(struct=20 drm_dp_mst_topology_mgr *mgr)
+{
+ int ret =3D 0;
+
+=09 mutex_lock(&mgr->lock);
+
+ if (mgr->mst_primary) {
+ int sret;
+ mutex_lock(&mgr->aux_lock);
+ sret =3D=20 drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd,=20 DP_RECEIVER_CAP_SIZE);
+ mutex_unlock(&mgr->aux_lock);
+ =09 if (sret !=3D DP_RECEIVER_CAP_SIZE) {
+ DRM_DEBUG_KMS("dpcd read=20 failed - undocked during suspend?\n");
+ ret =3D -1;
+ goto=20 out_unlock;
+ }
+
+ mutex_lock(&mgr->aux_lock);
+ =09 ret =3D drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
+ DP_MST_EN= | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC);
+ =09 mutex_unlock(&mgr->aux_lock);
+ if (ret < 0) {
+ =09 DRM_DEBUG_KMS("mst write failed - undocked during suspend?\n");
+ =09 ret =3D -1;
+ goto out_unlock;
+ }
+ ret =3D 0;
+ } else<= br>+ ret =3D -1;
+
+out_unlock:
+ mutex_unlock(&mgr->lock);<= br>+ return ret;
+}
+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume);
= +
+static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool=20 up)
+{
+ int len;
+ u8 replyblock[32];
+ int replylen,=20 origlen, curreply;
+ int ret;
+ struct drm_dp_sideband_msg_rx=20 *msg;
+ int basereg =3D up ? DP_SIDEBAND_MSG_UP_REQ_BASE :=20 DP_SIDEBAND_MSG_DOWN_REP_BASE;
+ msg =3D up ? &mgr->up_req_recv= : &mgr->down_rep_recv;
+
+ len =3D=20 min(mgr->max_dpcd_transaction_bytes, 16);
+=09 mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_read(mgr->aux, basereg,
+ replyblock, len);+ mutex_unlock(&mgr->aux_lock);
+ if (ret !=3D len) {
+ =09 DRM_DEBUG_KMS("failed to read DPCD down rep %d %d\n", len, ret);
+ =09 return;
+ }
+ ret =3D drm_dp_sideband_msg_build(msg, replyblock,=20 len, true);
+ if (!ret) {
+ DRM_DEBUG_KMS("sideband msg build=20 failed %d\n", replyblock[0]);
+ return;
+ }
+ replylen =3D=20 msg->curchunk_len + msg->curchunk_hdrlen;
+
+ origlen =3D=20 replylen;
+ replylen -=3D len;
+ curreply =3D len;
+ while=20 (replylen > 0) {
+ len =3D min3(replylen,=20 mgr->max_dpcd_transaction_bytes, 16);
+ =09 mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_read(mgr->aux, basereg + curreply,
+ =20 replyblock, len);
+ mutex_unlock(&mgr->aux_lock);
+ if=20 (ret !=3D len) {
+ DRM_DEBUG_KMS("failed to read a chunk\n");
+ = }
+ ret =3D drm_dp_sideband_msg_build(msg, replyblock, len, false);
+ i= f (ret =3D=3D false)
+ DRM_DEBUG_KMS("failed to build sideband msg\n"= );
+ curreply +=3D len;
+ replylen -=3D len;
+ }
+}
+
+stati= c=20 int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+{=
+ int ret =3D 0;
+
+ drm_dp_get_one_sb_msg(mgr, false);
+
+ if= =20 (mgr->down_rep_recv.have_eomt) {
+ struct drm_dp_sideband_msg_tx=20 *txmsg;
+ struct drm_dp_mst_branch *mstb;
+ int slot =3D -1;
+= =09 mstb =3D drm_dp_get_mst_branch_device(mgr,
+ =20 mgr->down_rep_recv.initial_hdr.lct,
+ =20 mgr->down_rep_recv.initial_hdr.rad);
+
+ if (!mstb) {
+ =09 DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",=20 mgr->down_rep_recv.initial_hdr.lct);
+ =09 memset(&mgr->down_rep_recv, 0, sizeof(struct=20 drm_dp_sideband_msg_rx));
+ return 0;
+ }
+
+ /* find=20 the message */
+ slot =3D mgr->down_rep_recv.initial_hdr.seqno;+ mutex_lock(&mgr->qlock);
+ txmsg =3D mstb->tx_slots[slot]= ;
+ /* remove from slots */
+ mutex_unlock(&mgr->qlock);
++ if (!txmsg) {
+ DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n",
+ mstb,
+ =20 mgr->down_rep_recv.initial_hdr.seqno,
+ =20 mgr->down_rep_recv.initial_hdr.lct,
+ =20 mgr->down_rep_recv.initial_hdr.rad[0],
+ =20 mgr->down_rep_recv.msg[0]);
+ =09 drm_dp_put_mst_branch_device(mstb);
+ =09 memset(&mgr->down_rep_recv, 0, sizeof(struct=20 drm_dp_sideband_msg_rx));
+ return 0;
+ }
+
+ =09 drm_dp_sideband_parse_reply(&mgr->down_rep_recv,=20 &txmsg->reply);
+ if (txmsg->reply.reply_type =3D=3D 1) {+ DRM_DEBUG_KMS("Got NAK reply: req 0x%02x, reason 0x%02x, nak data=20 0x%02x\n", txmsg->reply.req_type, txmsg->reply.u.nak.reason,=20 txmsg->reply.u.nak.nak_data);
+ }
+
+ =09 memset(&mgr->down_rep_recv, 0, sizeof(struct=20 drm_dp_sideband_msg_rx));
+ drm_dp_put_mst_branch_device(mstb);
+<= br>+ mutex_lock(&mgr->qlock);
+ txmsg->state =3D=20 DRM_DP_SIDEBAND_TX_RX;
+ mstb->tx_slots[slot] =3D NULL;
+ =09 mutex_unlock(&mgr->qlock);
+
+ =09 wake_up(&mgr->tx_waitq);
+ }
+ return ret;
+}
+
+st= atic int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+{<= br>+ int ret =3D 0;
+ drm_dp_get_one_sb_msg(mgr, true);
+
+ if=20 (mgr->up_req_recv.have_eomt) {
+ struct=20 drm_dp_sideband_msg_req_body msg;
+ struct drm_dp_mst_branch *mstb;+ bool seqno;
+ mstb =3D drm_dp_get_mst_branch_device(mgr,
+ = =20 mgr->up_req_recv.initial_hdr.lct,
+ =20 mgr->up_req_recv.initial_hdr.rad);
+ if (!mstb) {
+ =09 DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",=20 mgr->up_req_recv.initial_hdr.lct);
+ =09 memset(&mgr->up_req_recv, 0, sizeof(struct=20 drm_dp_sideband_msg_rx));
+ return 0;
+ }
+
+ seqno =3D=20 mgr->up_req_recv.initial_hdr.seqno;
+ =09 drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg);
+
+ if (msg.req_type =3D=3D DP_CONNECTION_STATUS_NOTIFY) {
+ =09 drm_dp_send_up_ack_reply(mgr, mstb, msg.req_type, seqno, false);
+ =09 drm_dp_update_port(mstb, &msg.u.conn_stat);
+ =09 DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt:=20 %d\n", msg.u.conn_stat.port_number,=20 msg.u.conn_stat.legacy_device_plug_status,=20 msg.u.conn_stat.displayport_device_plug_status,=20 msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port,=20 msg.u.conn_stat.peer_device_type);
+ =09 (*mgr->cbs->hotplug)(mgr);
+
+ } else if (msg.req_type =3D=3D= =20 DP_RESOURCE_STATUS_NOTIFY) {
+ drm_dp_send_up_ack_reply(mgr, mstb,=20 msg.req_type, seqno, false);
+ DRM_DEBUG_KMS("Got RSN: pn: %d=20 avail_pbn %d\n", msg.u.resource_stat.port_number,=20 msg.u.resource_stat.available_pbn);
+ }
+
+ =09 drm_dp_put_mst_branch_device(mstb);
+ =09 memset(&mgr->up_req_recv, 0, sizeof(struct=20 drm_dp_sideband_msg_rx));
+ }
+ return ret;
+}
+
+/**
+ * drm_dp_mst_hpd_irq() - MST hotplug IRQ notify
+ * @mgr: manager to notify irq for.
+ * @esi: 4 bytes from SINK_COUNT_ESI
+ *
+ *=20 This should be called from the driver when it detects a short IRQ,
+ * along with the value of the DEVICE_SERVICE_IRQ_VECTOR_ESI0. The
+ *=20 topology manager will process the sideband messages received as a result<= br>+ * of this.
+ */
+int drm_dp_mst_hpd_irq(struct=20 drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handled)
+{
+ int ret =3D 0;
+ int sc;
+ *handled =3D false;
+ sc =3D esi[0] & 0x= 3f;
+ if (sc !=3D mgr->sink_count) {
+
+ if (mgr->mst_primary=20 && mgr->sink_count =3D=3D 0 && sc) {
+ =09 mgr->mst_primary->link_address_sent =3D false;
+ =09 queue_work(system_long_wq, &mgr->work);
+ }
+ =09 mgr->sink_count =3D sc;
+ *handled =3D true;
+
+ }
+
+= if (esi[1] & DP_DOWN_REP_MSG_RDY) {
+ ret =3D=20 drm_dp_mst_handle_down_rep(mgr);
+ *handled =3D true;
+ }
+
= + if (esi[1] & DP_UP_REQ_MSG_RDY) {
+ ret |=3D=20 drm_dp_mst_handle_up_req(mgr);
+ *handled =3D true;
+ }
+
+=09 drm_dp_mst_kick_tx(mgr);
+ return ret;
+}
+EXPORT_SYMBOL(drm_dp_= mst_hpd_irq);
+
+/**
+ * drm_dp_mst_detect_port() - get connection status for an MST port
+ * @mgr: manager for this port
+ * @port: unverified pointer to a=20 port
+ *
+ * This returns the current connection state for a port. It validates the
+ * port pointer still exists so the caller doesn't require a reference
+ */
+enum drm_connector_status=20 drm_dp_mst_detect_port(struct drm_dp_mst_topology_mgr *mgr, struct=20 drm_dp_mst_port *port)
+{
+ enum drm_connector_status status =3D=20 connector_status_disconnected;
+
+ /* we need to search for the=20 port in the mgr in case its gone */
+ port =3D=20 drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ return=20 connector_status_disconnected;
+
+ if (!port->ddps)
+ goto=20 out;
+
+ switch (port->pdt) {
+ case DP_PEER_DEVICE_NONE:
= + case DP_PEER_DEVICE_MST_BRANCHING:
+ break;
+
+ case=20 DP_PEER_DEVICE_SST_SINK:
+ status =3D connector_status_connected;
= + break;
+ case DP_PEER_DEVICE_DP_LEGACY_CONV:
+ if=20 (port->ldps)
+ status =3D connector_status_connected;
+ break= ;
+ }
+out:
+ drm_dp_put_port(port);
+ return status;
+}
+EXP= ORT_SYMBOL(drm_dp_mst_detect_port);
+
+/**
+ * drm_dp_mst_get_edid() - get EDID for an MST port
+ * @connector:=20 toplevel connector to get EDID for
+ * @mgr: manager for this port
= + * @port: unverified pointer to a port.
+ *
+ * This returns an=20 EDID for the port connected to a connector,
+ * It validates the=20 pointer still exists so the caller doesn't require a
+ * reference.+ */
+struct edid *drm_dp_mst_get_edid(struct drm_connector=20 *connector, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port=20 *port)
+{
+ struct edid *edid =3D NULL;
+
+ /* we need to=20 search for the port in the mgr in case its gone */
+ port =3D=20 drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ return=20 NULL;
+
+ edid =3D drm_get_edid(connector, &port->aux.ddc);<= br>+ drm_dp_put_port(port);
+ return edid;
+}
+EXPORT_SYMBOL(drm_dp_= mst_get_edid);
+
+/**
+ * drm_dp_find_vcpi_slots() - find slots for this PBN value
+ * @mgr: manager to use
+ * @pbn: payload bandwidth to convert into slots.
= + */
+int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,+ int pbn)
+{
+ int num_slots;
+
+ num_slots =3D=20 DIV_ROUND_UP(pbn, mgr->pbn_div);
+
+ if (num_slots >=20 mgr->avail_slots)
+ return -ENOSPC;
+ return num_slots;
+}+EXPORT_SYMBOL(drm_dp_find_vcpi_slots);
+
+static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_dp_vcpi *vcpi, int pbn)
+{
+ int num_slots;
+ int=20 ret;
+
+ num_slots =3D DIV_ROUND_UP(pbn, mgr->pbn_div);
+
= + if (num_slots > mgr->avail_slots)
+ return -ENOSPC;
+
+ vcpi->pbn =3D pbn;
+ vcpi->aligned_pbn =3D num_slots *=20 mgr->pbn_div;
+ vcpi->num_slots =3D num_slots;
+
+ ret =3D= =20 drm_dp_mst_assign_payload_id(mgr, vcpi);
+ if (ret < 0)
+ =09 return ret;
+ return 0;
+}
+
+/**
+ *=20 drm_dp_mst_allocate_vcpi() - Allocate a virtual channel
+ * @mgr:=20 manager for this port
+ * @port: port to allocate a virtual channel=20 for.
+ * @pbn: payload bandwidth number to request
+ * @slots:=20 returned number of slots for this PBN.
+ */
+bool=20 drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct=20 drm_dp_mst_port *port, int pbn, int *slots)
+{
+ int ret;
+
+ port =3D drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ = =09 return false;
+
+ if (port->vcpi.vcpi > 0) {
+ =09 DRM_DEBUG_KMS("payload: vcpi %d already allocated for pbn %d - requested pbn %d\n", port->vcpi.vcpi, port->vcpi.pbn, pbn);
+ if (pbn=20 =3D=3D port->vcpi.pbn) {
+ *slots =3D port->vcpi.num_slots;+ =09 return true;
+ }
+ }
+
+ ret =3D drm_dp_init_vcpi(mgr,=20 &port->vcpi, pbn);
+ if (ret) {
+ DRM_DEBUG_KMS("failed to init vcpi %d %d %d\n", DIV_ROUND_UP(pbn, mgr->pbn_div),=20 mgr->avail_slots, ret);
+ goto out;
+ }
+=09 DRM_DEBUG_KMS("initing vcpi for %d %d\n", pbn, port->vcpi.num_slots);<= br>+ *slots =3D port->vcpi.num_slots;
+
+ drm_dp_put_port(port);
= + return true;
+out:
+ return false;
+}
+EXPORT_SYMBOL(drm_dp_= mst_allocate_vcpi);
+
+/**
+ * drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI+ * @mgr: manager for this port
+ * @port: unverified pointer to a=20 port.
+ *
+ * This just resets the number of slots for the ports=20 VCPI for later programming.
+ */
+void=20 drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct=20 drm_dp_mst_port *port)
+{
+ port =3D=20 drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ return;+ port->vcpi.num_slots =3D 0;
+ drm_dp_put_port(port);
+}
+EXP= ORT_SYMBOL(drm_dp_mst_reset_vcpi_slots);
+
+/**
+ * drm_dp_mst_deallocate_vcpi() - deallocate a VCPI
+ * @mgr: manager for this port
+ * @port: unverified port to deallocate vcpi for
+ */
+void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr=20 *mgr, struct drm_dp_mst_port *port)
+{
+ port =3D=20 drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ return;+
+ drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);
+=09 port->vcpi.num_slots =3D 0;
+ port->vcpi.pbn =3D 0;
+=09 port->vcpi.aligned_pbn =3D 0;
+ port->vcpi.vcpi =3D 0;
+=09 drm_dp_put_port(port);
+}
+EXPORT_SYMBOL(drm_dp_mst_deallocate_vcpi= );
+
+static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,
+ int id, struct drm_dp_payload *payload)
+{
+ u8=20 payload_alloc[3], status;
+ int ret;
+ int retries =3D 0;
+
+= =09 mutex_lock(&mgr->aux_lock);
+ drm_dp_dpcd_writeb(mgr->aux,=20 DP_PAYLOAD_TABLE_UPDATE_STATUS,
+ DP_PAYLOAD_TABLE_UPDATED);
+ mutex_unlock(&mgr->aux_lock);
+
+ payload_alloc[0] =3D id;<= br>+ payload_alloc[1] =3D payload->start_slot;
+ payload_alloc[2] =3D=20 payload->num_slots;
+
+ mutex_lock(&mgr->aux_lock);
+ ret =3D drm_dp_dpcd_write(mgr->aux, DP_PAYLOAD_ALLOCATE_SET,=20 payload_alloc, 3);
+ mutex_unlock(&mgr->aux_lock);
+ if=20 (ret !=3D 3) {
+ DRM_DEBUG_KMS("failed to write payload allocation=20 %d\n", ret);
+ goto fail;
+ }
+
+retry:
+=09 mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS,=20 &status);
+ mutex_unlock(&mgr->aux_lock);
+ if (ret=20 < 0) {
+ DRM_DEBUG_KMS("failed to read payload table status=20 %d\n", ret);
+ goto fail;
+ }
+
+ if (!(status &=20 DP_PAYLOAD_TABLE_UPDATED)) {
+ retries++;
+ if (retries < 20) {
+ usleep_range(10000, 20000);
+ goto retry;
+ }
+ =09 DRM_DEBUG_KMS("status not set after read payload table status %d\n",=20 status);
+ ret =3D -EINVAL;
+ goto fail;
+ }
+ ret =3D 0;+fail:
+ return ret;
+}
+
+
+/**
+ * drm_dp_check_act_status() -=20 Check ACT handled status.
+ * @mgr: manager to use
+ *
+ *=20 Check the payload status bits in the DPCD for ACT handled completion.
= + */
+int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)<= br>+{
+ u8 status;
+ int ret;
+ int count =3D 0;
+
+ do {
+ =09 mutex_lock(&mgr->aux_lock);
+ ret =3D=20 drm_dp_dpcd_readb(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS,=20 &status);
+ mutex_unlock(&mgr->aux_lock);
+
+ if=20 (ret < 0) {
+ DRM_DEBUG_KMS("failed to read payload table status %d\n", ret);
+ goto fail;
+ }
+
+ if (status &=20 DP_PAYLOAD_ACT_HANDLED)
+ break;
+ count++;
+ udelay(100);+
+ } while (count < 30);
+
+ if (!(status &=20 DP_PAYLOAD_ACT_HANDLED)) {
+ DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n", status, count);
+ ret =3D -EINVAL;
+ goto=20 fail;
+ }
+ return 0;
+fail:
+ return ret;
+}
+EXPORT_S= YMBOL(drm_dp_check_act_status);
+
+/**
+ * drm_dp_calc_pbn_mode() - Calculate the PBN for a mode.
+ * @clock: dot clock for the mode
+ * @bpp: bpp for the mode.
+ *
+ *=20 This uses the formula in the spec to calculate the PBN value for a mode.<= br>+ */
+int drm_dp_calc_pbn_mode(int clock, int bpp)
+{
+=09 fixed20_12 pix_bw;
+ fixed20_12 fbpp;
+ fixed20_12 result;
+=09 fixed20_12 margin, tmp;
+ u32 res;
+
+ pix_bw.full =3D=20 dfixed_const(clock);
+ fbpp.full =3D dfixed_const(bpp);
+ tmp.full = =3D dfixed_const(8);
+ fbpp.full =3D dfixed_div(fbpp, tmp);
+
+=09 result.full =3D dfixed_mul(pix_bw, fbpp);
+ margin.full =3D=20 dfixed_const(54);
+ tmp.full =3D dfixed_const(64);
+ margin.full =3D= =20 dfixed_div(margin, tmp);
+ result.full =3D dfixed_div(result, margin);=
+
+ margin.full =3D dfixed_const(1006);
+ tmp.full =3D dfixed_const(1000)= ;
+ margin.full =3D dfixed_div(margin, tmp);
+ result.full =3D=20 dfixed_mul(result, margin);
+
+ result.full =3D dfixed_div(result,=20 tmp);
+ result.full =3D dfixed_ceil(result);
+ res =3D=20 dfixed_trunc(result);
+ return res;
+}
+EXPORT_SYMBOL(drm_dp_cal= c_pbn_mode);
+
+static int test_calc_pbn_mode(void)
+{
+ int ret;
+ ret =3D=20 drm_dp_calc_pbn_mode(154000, 30);
+ if (ret !=3D 689)
+ return=20 -EINVAL;
+ ret =3D drm_dp_calc_pbn_mode(234000, 30);
+ if (ret !=3D= =20 1047)
+ return -EINVAL;
+ return 0;
+}
+
+/* we want to=20 kick the TX after we've ack the up/down IRQs. */
+static void=20 drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr)
+{
+=09 queue_work(system_long_wq, &mgr->tx_work);
+}
+
+static=20 void drm_dp_mst_dump_mstb(struct seq_file *m,
+ struct=20 drm_dp_mst_branch *mstb)
+{
+ struct drm_dp_mst_port *port;
+=09 int tabs =3D mstb->lct;
+ char prefix[10];
+ int i;
+
+ fo= r (i =3D 0; i < tabs; i++)
+ prefix[i] =3D '\t';
+ prefix[i] =3D= =20 '\0';
+
+ seq_printf(m, "%smst: %p, %d\n", prefix, mstb,=20 mstb->num_ports);
+ list_for_each_entry(port, &mstb->ports, next) {
+ seq_printf(m, "%sport: %d: ddps: %d ldps: %d, %p, conn:=20 %p\n", prefix, port->port_num, port->ddps, port->ldps, port,=20 port->connector);
+ if (port->mstb)
+ =09 drm_dp_mst_dump_mstb(m, port->mstb);
+ }
+}
+
+static=20 bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
+ =09 char *buf)
+{
+ int ret;
+ int i;
+=09 mutex_lock(&mgr->aux_lock);
+ for (i =3D 0; i < 4; i++) {+ ret =3D drm_dp_dpcd_read(mgr->aux, DP_PAYLOAD_TABLE_UPDATE_STATUS +=20 (i * 16), &buf[i * 16], 16);
+ if (ret !=3D 16)
+ break;
= + }
+ mutex_unlock(&mgr->aux_lock);
+ if (i =3D=3D 4)
+ =09 return true;
+ return false;
+}
+
+/**
+ *=20 drm_dp_mst_dump_topology(): dump topology to seq file.
+ * @m:=20 seq_file to dump output to
+ * @mgr: manager to dump current topology for.
+ *
+ * helper to dump MST topology to a seq file for=20 debugfs.
+ */
+void drm_dp_mst_dump_topology(struct seq_file *m,+ struct drm_dp_mst_topology_mgr *mgr)
+{
+ int i;
+=09 struct drm_dp_mst_port *port;
+ mutex_lock(&mgr->lock);
+=09 if (mgr->mst_primary)
+ drm_dp_mst_dump_mstb(m,=20 mgr->mst_primary);
+
+ /* dump VCPIs */
+=09 mutex_unlock(&mgr->lock);
+
+=09 mutex_lock(&mgr->payload_lock);
+ seq_printf(m, "vcpi: %lx\n", mgr->payload_mask);
+
+ for (i =3D 0; i <=20 mgr->max_payloads; i++) {
+ if (mgr->proposed_vcpis[i]) {
+ port =3D container_of(mgr->proposed_vcpis[i], struct=20 drm_dp_mst_port, vcpi);
+ seq_printf(m, "vcpi %d: %d %d %d\n", i,=20 port->port_num, port->vcpi.vcpi, port->vcpi.num_slots);
+ } else
+ seq_printf(m, "vcpi %d:unsed\n", i);
+ }
+ for (i =3D=20 0; i < mgr->max_payloads; i++) {
+ seq_printf(m, "payload %d:=20 %d, %d, %d\n",
+ i,
+ mgr->payloads[i].payload_state,<= br>+ mgr->payloads[i].start_slot,
+ =20 mgr->payloads[i].num_slots);
+
+
+ }
+=09 mutex_unlock(&mgr->payload_lock);
+
+=09 mutex_lock(&mgr->lock);
+ if (mgr->mst_primary) {
+ u8=20 buf[64];
+ bool bret;
+ int ret;
+ ret =3D=20 drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf, DP_RECEIVER_CAP_SIZE);+ seq_printf(m, "dpcd: ");
+ for (i =3D 0; i <=20 DP_RECEIVER_CAP_SIZE; i++)
+ seq_printf(m, "%02x ", buf[i]);
+ =09 seq_printf(m, "\n");
+ ret =3D drm_dp_dpcd_read(mgr->aux,=20 DP_FAUX_CAP, buf, 2);
+ seq_printf(m, "faux/mst: ");
+ for (i =3D= =20 0; i < 2; i++)
+ seq_printf(m, "%02x ", buf[i]);
+ =09 seq_printf(m, "\n");
+ ret =3D drm_dp_dpcd_read(mgr->aux,=20 DP_MSTM_CTRL, buf, 1);
+ seq_printf(m, "mst ctrl: ");
+ for (i =3D 0; i < 1; i++)
+ seq_printf(m, "%02x ", buf[i]);
+ =09 seq_printf(m, "\n");
+
+ bret =3D dump_dp_payload_table(mgr, buf);=
+ if (bret =3D=3D true) {
+ seq_printf(m, "payload table: ");
+ = =09 for (i =3D 0; i < 63; i++)
+ seq_printf(m, "%02x ", buf[i]);
= + seq_printf(m, "\n");
+ }
+
+ }
+
+=09 mutex_unlock(&mgr->lock);
+
+}
+EXPORT_SYMBOL(drm_dp_mst_= dump_topology);
+
+static void drm_dp_tx_work(struct work_struct *work)
+{
+ struct=20 drm_dp_mst_topology_mgr *mgr =3D container_of(work, struct=20 drm_dp_mst_topology_mgr, tx_work);
+
+=09 mutex_lock(&mgr->qlock);
+ if (mgr->tx_down_in_progress)
= + process_single_down_tx_qlock(mgr);
+=09 mutex_unlock(&mgr->qlock);
+}
+
+/**
+ *=20 drm_dp_mst_topology_mgr_init - initialise a topology manager
+ *=20 @mgr: manager struct to initialise
+ * @dev: device providing this=20 structure - for i2c addition.
+ * @aux: DP helper aux channel to talk to this device
+ * @max_dpcd_transaction_bytes: hw specific DPCD=20 transaction limit
+ * @max_payloads: maximum number of payloads this=20 GPU can source
+ * @conn_base_id: the connector object ID the MST=20 device is connected to.
+ *
+ * Return 0 for success, or negative=20 error code on failure
+ */
+int=20 drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
+ =09 struct device *dev, struct drm_dp_aux *aux,
+ int=20 max_dpcd_transaction_bytes,
+ int max_payloads, int conn_base_id)<= br>+{
+ mutex_init(&mgr->lock);
+ mutex_init(&mgr->qlock);
+ mutex_init(&mgr->aux_lock);
+=09 mutex_init(&mgr->payload_lock);
+=09 INIT_LIST_HEAD(&mgr->tx_msg_upq);
+=09 INIT_LIST_HEAD(&mgr->tx_msg_downq);
+=09 INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work);
+=09 INIT_WORK(&mgr->tx_work, drm_dp_tx_work);
+=09 init_waitqueue_head(&mgr->tx_waitq);
+ mgr->dev =3D dev;
= + mgr->aux =3D aux;
+ mgr->max_dpcd_transaction_bytes =3D=20 max_dpcd_transaction_bytes;
+ mgr->max_payloads =3D max_payloads;+ mgr->conn_base_id =3D conn_base_id;
+ mgr->payloads =3D=20 kcalloc(max_payloads, sizeof(struct drm_dp_payload), GFP_KERNEL);
+=09 if (!mgr->payloads)
+ return -ENOMEM;
+ mgr->proposed_vcpis =3D kcalloc(max_payloads, sizeof(struct drm_dp_vcpi *), GFP_KERNEL);
= + if (!mgr->proposed_vcpis)
+ return -ENOMEM;
+ set_bit(0,=20 &mgr->payload_mask);
+ test_calc_pbn_mode();
+ return 0;
= +}
+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);
+
+/**
+ * drm_dp_mst_topology_mgr_destroy() - destroy topology manager.
+ *=20 @mgr: manager to destroy
+ */
+void=20 drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
+= {
+ mutex_lock(&mgr->payload_lock);
+ kfree(mgr->payloads);
= + mgr->payloads =3D NULL;
+ kfree(mgr->proposed_vcpis);
+=09 mgr->proposed_vcpis =3D NULL;
+=09 mutex_unlock(&mgr->payload_lock);
+ mgr->dev =3D NULL;
+=09 mgr->aux =3D NULL;
+}
+EXPORT_SYMBOL(drm_dp_mst_topology_mgr_des= troy);
+
+/* I2C device */
+static int drm_dp_mst_i2c_xfer(struct i2c_adapter=20 *adapter, struct i2c_msg *msgs,
+ int num)
+{
+ struct drm_dp_aux *aux =3D adapter->algo_data;
+ struct drm_dp_mst_port=20 *port =3D container_of(aux, struct drm_dp_mst_port, aux);
+ struct=20 drm_dp_mst_branch *mstb;
+ struct drm_dp_mst_topology_mgr *mgr =3D=20 port->mgr;
+ unsigned int i;
+ bool reading =3D false;
+=09 struct drm_dp_sideband_msg_req_body msg;
+ struct=20 drm_dp_sideband_msg_tx *txmsg =3D NULL;
+ int ret;
+
+ mstb =3D=20 drm_dp_get_validated_mstb_ref(mgr, port->parent);
+ if (!mstb)
+ return -EREMOTEIO;
+
+ /* construct i2c msg */
+ /* see if=20 last msg is a read */
+ if (msgs[num - 1].flags & I2C_M_RD)
+=09 reading =3D true;
+
+ if (!reading) {
+ =09 DRM_DEBUG_KMS("Unsupported I2C transaction for MST device\n");
+ ret =3D -EIO;
+ goto out;
+ }
+
+ msg.req_type =3D=20 DP_REMOTE_I2C_READ;
+ msg.u.i2c_read.num_transactions =3D num - 1;
= + msg.u.i2c_read.port_number =3D port->port_num;
+ for (i =3D 0; i &= lt; num - 1; i++) {
+ msg.u.i2c_read.transactions[i].i2c_dev_id =3D=20 msgs[i].addr;
+ msg.u.i2c_read.transactions[i].num_bytes =3D=20 msgs[i].len;
+ memcpy(&msg.u.i2c_read.transactions[i].bytes,=20 msgs[i].buf, msgs[i].len);
+ }
+ msg.u.i2c_read.read_i2c_device_id =3D msgs[num - 1].addr;
+ msg.u.i2c_read.num_bytes_read =3D msgs[num = -=20 1].len;
+
+ txmsg =3D kzalloc(sizeof(*txmsg), GFP_KERNEL);
+ if=20 (!txmsg) {
+ ret =3D -ENOMEM;
+ goto out;
+ }
+
+=09 txmsg->dst =3D mstb;
+ drm_dp_encode_sideband_req(&msg, txmsg);=
+
+ drm_dp_queue_down_tx(mgr, txmsg);
+
+ ret =3D=20 drm_dp_mst_wait_tx_reply(mstb, txmsg);
+ if (ret > 0) {
+
+=09 if (txmsg->reply.reply_type =3D=3D 1) { /* got a NAK back */
+ r= et =3D -EREMOTEIO;
+ goto out;
+ }
+ if=20 (txmsg->reply.u.remote_i2c_read_ack.num_bytes !=3D msgs[num - 1].len) = {
+ ret =3D -EIO;
+ goto out;
+ }
+ memcpy(msgs[num - 1].buf,= =20 txmsg->reply.u.remote_i2c_read_ack.bytes, msgs[num - 1].len);
+ =09 ret =3D num;
+ }
+out:
+ kfree(txmsg);
+=09 drm_dp_put_mst_branch_device(mstb);
+ return ret;
+}
+
+stati= c u32 drm_dp_mst_i2c_functionality(struct i2c_adapter *adapter)
+{
+ return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL |
+ =20 I2C_FUNC_SMBUS_READ_BLOCK_DATA |
+ =20 I2C_FUNC_SMBUS_BLOCK_PROC_CALL |
+ I2C_FUNC_10BIT_ADDR;
+}+
+static const struct i2c_algorithm drm_dp_mst_i2c_algo =3D {
+ .functionality= =3D drm_dp_mst_i2c_functionality,
+ .master_xfer =3D drm_dp_mst_i2c_xfer,=
+};
+
+/**
+ * drm_dp_mst_register_i2c_bus() - register an I2C adapter for=20 I2C-over-AUX
+ * @aux: DisplayPort AUX channel
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+static int=20 drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux)
+{
+=09 aux->ddc.algo =3D &drm_dp_mst_i2c_algo;
+ aux->ddc.algo_data= =3D aux;
+ aux->ddc.retries =3D 3;
+
+ aux->ddc.class =3D=20 I2C_CLASS_DDC;
+ aux->ddc.owner =3D THIS_MODULE;
+=09 aux->ddc.dev.parent =3D aux->dev;
+ aux->ddc.dev.of_node =3D=20 aux->dev->of_node;
+
+ strlcpy(aux->ddc.name,=20 aux->name ? aux->name : dev_name(aux->dev),
+ =09 sizeof(aux->ddc.name));
+
+ return=20 i2c_add_adapter(&aux->ddc);
+}
+
+/**
+ *=20 drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapter
+ * @aux: DisplayPort AUX channel
+ */
+static void=20 drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux)
+{
+=09 i2c_del_adapter(&aux->ddc);
+}
diff --git=20 a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
ne= w file mode 100644
index 0000000..6626d1b
--- /dev/null
+++=20 b/include/drm/drm_dp_mst_helper.h
@@ -0,0 +1,507 @@
+/*
+ *=20 Copyright =C2=A9 2014 Red Hat.
+ *
+ * Permission to use, copy, mod= ify, distribute, and sell this software and its
+ * documentation for any purpose is hereby granted without fee, provided that
+ * the above=20 copyright notice appear in all copies and that both that copyright
+ * notice and this permission notice appear in supporting documentation,=20 and
+ * that the name of the copyright holders not be used in=20 advertising or
+ * publicity pertaining to distribution of the=20 software without specific,
+ * written prior permission. The=20 copyright holders make no representations
+ * about the suitability=20 of this software for any purpose. It is provided "as
+ * is" without express or implied warranty.
+ *
+ * THE COPYRIGHT HOLDERS=20 DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
+ * INCLUDING=20 ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
+ *=20 EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR<= br>+ * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS=20 OF USE,
+ * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,=20 NEGLIGENCE OR OTHER
+ * TORTIOUS ACTION, ARISING OUT OF OR IN=20 CONNECTION WITH THE USE OR PERFORMANCE
+ * OF THIS SOFTWARE.
+ */+#ifndef _DRM_DP_MST_HELPER_H_
+#define _DRM_DP_MST_HELPER_H_
+
+#includ= e <linux/types.h>
+#include <drm/drm_dp_helper.h>
+
+= struct drm_dp_mst_branch;
+
+/**
+ * struct drm_dp_vcpi - Virtual=20 Channel Payload Identifer
+ * @vcpi: Virtual channel ID.
+ * @pbn: Payload Bandwidth Number for this channel
+ * @aligned_pbn: PBN=20 aligned with slot size
+ * @num_slots: number of slots for this PBN+ */
+struct drm_dp_vcpi {
+ int vcpi;
+ int pbn;
+ int=20 aligned_pbn;
+ int num_slots;
+};
+
+/**
+ * struct=20 drm_dp_mst_port - MST port
+ * @kref: reference count for this port.+ * @guid_valid: for DP 1.2 devices if we have validated the GUID.
+ * @guid: guid for DP 1.2 device on this port.
+ * @port_num: port=20 number
+ * @input: if this port is an input port.
+ * @mcs:=20 message capability status - DP 1.2 spec.
+ * @ddps: DisplayPort=20 Device Plug Status - DP 1.2
+ * @pdt: Peer Device Type
+ * @ldps:=20 Legacy Device Plug Status
+ * @dpcd_rev: DPCD revision of device on=20 this port
+ * @available_pbn: Available bandwidth for this port.
+ * @next: link to next port on this branch device
+ * @mstb: branch=20 device attach below this port
+ * @aux: i2c aux transport to talk to=20 device connected to this port.
+ * @parent: branch device parent of=20 this port
+ * @vcpi: Virtual Channel Payload info for this port.
+ * @connector: DRM connector this port is connected to.
+ * @mgr:=20 topology manager this port lives under.
+ *
+ * This structure=20 represents an MST port endpoint on a device somewhere
+ * in the MST=20 topology.
+ */
+struct drm_dp_mst_port {
+ struct kref kref;
= +
+ /* if dpcd 1.2 device is on this port - its GUID info */
+ bool=20 guid_valid;
+ u8 guid[16];
+
+ u8 port_num;
+ bool input;
= + bool mcs;
+ bool ddps;
+ u8 pdt;
+ bool ldps;
+ u8=20 dpcd_rev;
+ uint16_t available_pbn;
+ struct list_head next;
+=09 struct drm_dp_mst_branch *mstb; /* pointer to an mstb if this port has=20 one */
+ struct drm_dp_aux aux; /* i2c bus for this port? */
+=09 struct drm_dp_mst_branch *parent;
+
+ struct drm_dp_vcpi vcpi;
+ struct drm_connector *connector;
+ struct drm_dp_mst_topology_mgr=20 *mgr;
+};
+
+/**
+ * struct drm_dp_mst_branch - MST branch=20 device.
+ * @kref: reference count for this port.
+ * @rad:=20 Relative Address to talk to this branch device.
+ * @lct: Link count=20 total to talk to this branch device.
+ * @num_ports: number of ports=20 on the branch.
+ * @msg_slots: one bit per transmitted msg slot.
+ * @ports: linked list of ports on this branch.
+ * @port_parent:=20 pointer to the port parent, NULL if toplevel.
+ * @mgr: topology=20 manager for this branch device.
+ * @tx_slots: transmission slots for this device.
+ * @last_seqno: last sequence number used to talk to=20 this.
+ * @link_address_sent: if a link address message has been sent to this device yet.
+ *
+ * This structure represents an MST=20 branch device, there is one
+ * primary branch device at the root,=20 along with any others connected
+ * to downstream ports
+ */
+st= ruct drm_dp_mst_branch {
+ struct kref kref;
+ u8 rad[8];
+ u8 lct;<= br>+ int num_ports;
+
+ int msg_slots;
+ struct list_head ports;
= +
+ /* list of tx ops queue for this port */
+ struct drm_dp_mst_port=20 *port_parent;
+ struct drm_dp_mst_topology_mgr *mgr;
+
+ /*=20 slots are protected by mstb->mgr->qlock */
+ struct=20 drm_dp_sideband_msg_tx *tx_slots[2];
+ int last_seqno;
+ bool=20 link_address_sent;
+};
+
+
+/* sideband msg header - not bit struct */
+struct drm_dp_sideband_msg_hdr {
+ u8 lct;
+ u8=20 lcr;
+ u8 rad[8];
+ bool broadcast;
+ bool path_msg;
+ u8=20 msg_len;
+ bool somt;
+ bool eomt;
+ bool seqno;
+};
+
= +struct drm_dp_nak_reply {
+ u8 guid[16];
+ u8 reason;
+ u8 nak_data;+};
+
+struct drm_dp_link_address_ack_reply {
+ u8 guid[16];
+ u8 nports;
+=09 struct drm_dp_link_addr_reply_port {
+ bool input_port;
+ u8=20 peer_device_type;
+ u8 port_number;
+ bool mcs;
+ bool ddps;<= br>+ bool legacy_device_plug_status;
+ u8 dpcd_revision;
+ u8=20 peer_guid[16];
+ bool num_sdp_streams;
+ bool=20 num_sdp_stream_sinks;
+ } ports[16];
+};
+
+struct=20 drm_dp_remote_dpcd_read_ack_reply {
+ u8 port_number;
+ u8=20 num_bytes;
+ u8 bytes[255];
+};
+
+struct=20 drm_dp_remote_dpcd_write_ack_reply {
+ u8 port_number;
+};
+
= +struct drm_dp_remote_dpcd_write_nak_reply {
+ u8 port_number;
+ u8=20 reason;
+ u8 bytes_written_before_failure;
+};
+
+struct=20 drm_dp_remote_i2c_read_ack_reply {
+ u8 port_number;
+ u8=20 num_bytes;
+ u8 bytes[255];
+};
+
+struct=20 drm_dp_remote_i2c_read_nak_reply {
+ u8 port_number;
+ u8=20 nak_reason;
+ u8 i2c_nak_transaction;
+};
+
+struct=20 drm_dp_remote_i2c_write_ack_reply {
+ u8 port_number;
+};
+
+=
+struct drm_dp_sideband_msg_rx {
+ u8 chunk[48];
+ u8 msg[256];
+ u8=20 curchunk_len;
+ u8 curchunk_idx; /* chunk we are parsing now */
+=09 u8 curchunk_hdrlen;
+ u8 curlen; /* total length of the msg */
+=09 bool have_somt;
+ bool have_eomt;
+ struct drm_dp_sideband_msg_hdr initial_hdr;
+};
+
+
+struct drm_dp_allocate_payload {
+ u8 port_number;
+ u8 number_sdp_streams;
+ u8 vcpi;
+ u16 pbn;<= br>+ u8 sdp_stream_sink[8];
+};
+
+struct=20 drm_dp_allocate_payload_ack_reply {
+ u8 port_number;
+ u8 vcpi;+ u16 allocated_pbn;
+};
+
+struct=20 drm_dp_connection_status_notify {
+ u8 guid[16];
+ u8 port_number;<= br>+ bool legacy_device_plug_status;
+ bool=20 displayport_device_plug_status;
+ bool message_capability_status;
+ bool input_port;
+ u8 peer_device_type;
+};
+
+struct=20 drm_dp_remote_dpcd_read {
+ u8 port_number;
+ u32 dpcd_address;
= + u8 num_bytes;
+};
+
+struct drm_dp_remote_dpcd_write {
+ u8 port_number;
+ u32 dpcd_address;
+ u8 num_bytes;
+ u8=20 bytes[255];
+};
+
+struct drm_dp_remote_i2c_read {
+ u8=20 num_transactions;
+ u8 port_number;
+ struct {
+ u8=20 i2c_dev_id;
+ u8 num_bytes;
+ u8 bytes[255];
+ u8=20 no_stop_bit;
+ u8 i2c_transaction_delay;
+ } transactions[4];
+ u8 read_i2c_device_id;
+ u8 num_bytes_read;
+};
+
+struct=20 drm_dp_remote_i2c_write {
+ u8 port_number;
+ u8=20 write_i2c_device_id;
+ u8 num_bytes;
+ u8 bytes[255];
+};
++/* this covers ENUM_RESOURCES, POWER_DOWN_PHY, POWER_UP_PHY */
+struct=20 drm_dp_port_number_req {
+ u8 port_number;
+};
+
+struct=20 drm_dp_enum_path_resources_ack_reply {
+ u8 port_number;
+ u16=20 full_payload_bw_number;
+ u16 avail_payload_bw_number;
+};
+
= +/* covers POWER_DOWN_PHY, POWER_UP_PHY */
+struct=20 drm_dp_port_number_rep {
+ u8 port_number;
+};
+
+struct=20 drm_dp_query_payload {
+ u8 port_number;
+ u8 vcpi;
+};
+
= +struct drm_dp_resource_status_notify {
+ u8 port_number;
+ u8 guid[16];+ u16 available_pbn;
+};
+
+struct=20 drm_dp_query_payload_ack_reply {
+ u8 port_number;
+ u8=20 allocated_pbn;
+};
+
+struct drm_dp_sideband_msg_req_body {
+ u8 req_type;
+ union ack_req {
+ struct=20 drm_dp_connection_status_notify conn_stat;
+ struct=20 drm_dp_port_number_req port_num;
+ struct=20 drm_dp_resource_status_notify resource_stat;
+
+ struct=20 drm_dp_query_payload query_payload;
+ struct drm_dp_allocate_payload allocate_payload;
+
+ struct drm_dp_remote_dpcd_read dpcd_read;+ struct drm_dp_remote_dpcd_write dpcd_write;
+
+ struct=20 drm_dp_remote_i2c_read i2c_read;
+ struct drm_dp_remote_i2c_write=20 i2c_write;
+ } u;
+};
+
+struct=20 drm_dp_sideband_msg_reply_body {
+ u8 reply_type;
+ u8 req_type;+ union ack_replies {
+ struct drm_dp_nak_reply nak;
+ struct=20 drm_dp_link_address_ack_reply link_addr;
+ struct=20 drm_dp_port_number_rep port_number;
+
+ struct=20 drm_dp_enum_path_resources_ack_reply path_resources;
+ struct=20 drm_dp_allocate_payload_ack_reply allocate_payload;
+ struct=20 drm_dp_query_payload_ack_reply query_payload;
+
+ struct=20 drm_dp_remote_dpcd_read_ack_reply remote_dpcd_read_ack;
+ struct=20 drm_dp_remote_dpcd_write_ack_reply remote_dpcd_write_ack;
+ struct=20 drm_dp_remote_dpcd_write_nak_reply remote_dpcd_write_nack;
+
+ =09 struct drm_dp_remote_i2c_read_ack_reply remote_i2c_read_ack;
+ =09 struct drm_dp_remote_i2c_read_nak_reply remote_i2c_read_nack;
+ =09 struct drm_dp_remote_i2c_write_ack_reply remote_i2c_write_ack;
+ } u;<= br>+};
+
+/* msg is queued to be put into a slot */
+#define=20 DRM_DP_SIDEBAND_TX_QUEUED 0
+/* msg has started transmitting on a=20 slot - still on msgq */
+#define DRM_DP_SIDEBAND_TX_START_SEND 1
+/= * msg has finished transmitting on a slot - removed from msgq only in=20 slot */
+#define DRM_DP_SIDEBAND_TX_SENT 2
+/* msg has received a=20 response - removed from slot */
+#define DRM_DP_SIDEBAND_TX_RX 3
+#= define DRM_DP_SIDEBAND_TX_TIMEOUT 4
+
+struct drm_dp_sideband_msg_tx {+ u8 msg[256];
+ u8 chunk[48];
+ u8 cur_offset;
+ u8 cur_len;
= + struct drm_dp_mst_branch *dst;
+ struct list_head next;
+ int=20 seqno;
+ int state;
+ bool path_msg;
+ struct=20 drm_dp_sideband_msg_reply_body reply;
+};
+
+/* sideband msg=20 handler */
+struct drm_dp_mst_topology_mgr;
+struct=20 drm_dp_mst_topology_cbs {
+ /* create a connector for a port */
+=09 struct drm_connector *(*add_connector)(struct drm_dp_mst_topology_mgr=20 *mgr, struct drm_dp_mst_port *port, char *path);
+ void=20 (*destroy_connector)(struct drm_dp_mst_topology_mgr *mgr,
+ =20 struct drm_connector *connector);
+ void (*hotplug)(struct=20 drm_dp_mst_topology_mgr *mgr);
+
+};
+
+#define=20 DP_MAX_PAYLOAD (sizeof(unsigned long) * 8)
+
+#define=20 DP_PAYLOAD_LOCAL 1
+#define DP_PAYLOAD_REMOTE 2
+#define=20 DP_PAYLOAD_DELETE_LOCAL 3
+
+struct drm_dp_payload {
+ int=20 payload_state;
+ int start_slot;
+ int num_slots;
+};
+
+/= **
+ * struct drm_dp_mst_topology_mgr - DisplayPort MST manager
+ * @dev: device pointer for adding i2c devices etc.
+ * @cbs: callbacks for=20 connector addition and destruction.
+ * @max_dpcd_transaction_bytes - maximum number of bytes to read/write in one go.
+ * @aux: aux=20 channel for the DP connector.
+ * @max_payloads: maximum number of=20 payloads the GPU can generate.
+ * @conn_base_id: DRM connector ID=20 this mgr is connected to.
+ * @down_rep_recv: msg receiver state for=20 down replies.
+ * @up_req_recv: msg receiver state for up requests.+ * @lock: protects mst state, primary, guid, dpcd.
+ * @aux_lock:=20 protects aux channel.
+ * @mst_state: if this manager is enabled for=20 an MST capable port.
+ * @mst_primary: pointer to the primary branch=20 device.
+ * @guid_valid: GUID valid for the primary branch device.
= + * @guid: GUID for primary port.
+ * @dpcd: cache of DPCD for primary port.
+ * @pbn_div: PBN to slots divisor.
+ *
+ * This struct=20 represents the toplevel displayport MST topology manager.
+ * There=20 should be one instance of this for every MST capable DP connector
+ * on the GPU.
+ */
+struct drm_dp_mst_topology_mgr {
+
+=09 struct device *dev;
+ struct drm_dp_mst_topology_cbs *cbs;
+ int=20 max_dpcd_transaction_bytes;
+ struct drm_dp_aux *aux; /* auxch for=20 this topology mgr to use */
+ int max_payloads;
+ int=20 conn_base_id;
+
+ /* only ever accessed from the workqueue - which should be serialised */
+ struct drm_dp_sideband_msg_rx=20 down_rep_recv;
+ struct drm_dp_sideband_msg_rx up_req_recv;
+
+ /* pointer to info about the initial MST device */
+ struct mutex=20 lock; /* protects mst_state + primary + guid + dpcd */
+
+ struct=20 mutex aux_lock; /* protect access to the AUX */
+ bool mst_state;
+ struct drm_dp_mst_branch *mst_primary;
+ /* primary MST device GUID=20 */
+ bool guid_valid;
+ u8 guid[16];
+ u8=20 dpcd[DP_RECEIVER_CAP_SIZE];
+ u8 sink_count;
+ int pbn_div;
+=09 int total_slots;
+ int avail_slots;
+ int total_pbn;
+
+ /*=20 messages to be transmitted */
+ /* qlock protects the upq/downq and=20 in_progress,
+ the mstb tx_slots and txmsg->state once they are queued */
+ struct mutex qlock;
+ struct list_head tx_msg_downq;+ struct list_head tx_msg_upq;
+ bool tx_down_in_progress;
+ bool=20 tx_up_in_progress;
+
+ /* payload info + lock for it */
+=09 struct mutex payload_lock;
+ struct drm_dp_vcpi **proposed_vcpis;
+ struct drm_dp_payload *payloads;
+ unsigned long payload_mask;
++ wait_queue_head_t tx_waitq;
+ struct work_struct work;
+
+=09 struct work_struct tx_work;
+};
+
+int=20 drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, struct device *dev, struct drm_dp_aux *aux, int max_dpcd_transaction_bytes,=20 int max_payloads, int conn_base_id);
+
+void=20 drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);
= +
+
+int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr,=20 bool mst_state);
+
+
+int drm_dp_mst_hpd_irq(struct=20 drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handled);
+
+
+enum drm_connector_status drm_dp_mst_detect_port(struct=20 drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
+
+str= uct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct=20 drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
+
++int drm_dp_calc_pbn_mode(int clock, int bpp);
+
+
+bool=20 drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct=20 drm_dp_mst_port *port, int pbn, int *slots);
+
+
+void=20 drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct=20 drm_dp_mst_port *port);
+
+
+void=20 drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ =09 struct drm_dp_mst_port *port);
+
+
+int=20 drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,
+ =20 int pbn);
+
+
+int drm_dp_update_payload_part1(struct=20 drm_dp_mst_topology_mgr *mgr);
+
+
+int=20 drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr);
++int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr);
+
+v= oid drm_dp_mst_dump_topology(struct seq_file *m,
+ struct=20 drm_dp_mst_topology_mgr *mgr);
+
+void=20 drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr);
= +int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr);
= +#endif
Dave Airlie = =20 Tuesday, May 2= 0,=20 2014 7:54 PM
Hey,

So this set= is pretty close to what I think we should be merging initially,

Sinc= e the last set, it makes fbcon and suspend/resume work a lot better,
I've also fixed a couple of bugs in -intel that make things work a lot
bet= ter.

I've bashed on this a bit using kms-flip from intel-gpu-tools, hacked
to=20 add 3 monitor support.

It still generates a fair few i915 state=20 checker backtraces, and some
of them are fairly hard to work out, it=20 might be we should just tone
down the state checker for=20 encoders/connectors with no actual hw backing
them.

Dave.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/inte= l-gfx

--
Sent using Postbox:
http://www.getpostbox.com
--------------010607080907070602060102 Content-Type: image/jpeg; x-apple-mail-type=stationery; name="postbox-contact.jpg" Content-Transfer-Encoding: base64 Content-ID: Content-Disposition: inline; filename="postbox-contact.jpg" /9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAMCAgICAgMCAgIDAwMDBAYEBAQEBAgGBgUGCQgK CgkICQkKDA8MCgsOCwkJDRENDg8QEBEQCgwSExIQEw8QEBD/2wBDAQMDAwQDBAgEBAgQCwkL EBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBD/wAAR CAAZABkDAREAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAA AgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkK FhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWG h4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl 5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREA AgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYk NOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOE hYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk 5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD4ikvNV8RaufG/jlrTXL/Wre3vtt7YwSvc O8KGSaeVkMhUybwsasvC8FFVA3DjcfDBru30PquGOFcRxHVfK+WnHeX6JdWWHv8AdxHofhaJ O0aeF9NYAem54Gf8SxPvXiPOq97pI/UYeGGVKnyynNvvdflYpx2Nxda7p11o2n6TYXUF1FMU s9Ngi8yNGDOYnCeZDKqguCr87SV2MoDetgcxji/datI/O+KuDK/DyVenLnpPS/VPsyH/AIbY /ao/6LNrX/fuD/43XpnxB9O+BP2LLr4meGzdW/xi0Lw7ruk2WnwT6ZqNqwhkhXT7bZLHOG5B O5SCg5U9eteHicNHF1FPuff5FxbVyHDfUuXRO911v3v9xoXH/BN/4mwafNqn/C6vh+LW2nSC eeS9mWGJ3UMoaQRlRnK85xyMkc4j+yPQ9r/iIf8Ai/AW1/YpuPA0Fh4v1T46+FdaubPVtPEW naJC92JSbyJG3TZVUAViehzx65ohgvqs/a9jz8043qZrhpYLlup6a2089OqPm3/hlW1/6HEf +AP/ANsr0PrnkfB/U/M9R+DHxf8Agf8AFbxKfB37TVveeDdcsrO00vT9WgnNtAJICyyw3QkR xCzFj8zKVUpg4zWiouh8GqMalb2rXMj7Ptf2ZP2VLTwLdaFZ3iXcepXNvdR6nNK0978gbaIr yEi1AIk442c5bdhSuftal7k2Vj5F/aM+KH7PPwCWPwn+z1OPE/jRr+Ce61y7uor+DRreJsm3 iliVYpZpGxv2gqqjbktyNVCVZWqbAp+zfNHc+e/+Ek/al/6FbxR/4Tj/APxqr+r0+w/bz7jv 22f+TqPiJ/2E0/8AREdbmJ5En/ILb/eNSwNn4Tf8lU8G/wDYwad/6Ux1QH77UAf/2Q== --------------010607080907070602060102-- --------------070900050502070101020808-- --===============1231931286== Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Disposition: inline _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org http://lists.freedesktop.org/mailman/listinfo/intel-gfx --===============1231931286==--