* [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user
@ 2015-06-13 13:07 Chen Gang
2015-06-13 13:08 ` [Qemu-devel] [PATCH 01/10 v12] linux-user: tilegx: Firstly add architecture related features Chen Gang
` (10 more replies)
0 siblings, 11 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:07 UTC
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
It can run a "Hello world" elf64 binary to completion, along with the
related test cases below:
- with "--enable-debug" (assertions enabled, built with "-g"):
./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/a.log
./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/b.log
- without "--enable-debug" (assertions disabled, built with "-O2 -g"):
./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/c.log
./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/d.log
Chen Gang (10):
linux-user: tilegx: Firstly add architecture related features
linux-user: Support tilegx architecture in linux-user
linux-user/syscall.c: conditionalize syscalls which are not defined in
tilegx
target-tilegx: Add opcode basic implementation from Tilera Corporation
target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage
target-tilegx: Add special register information from Tilera
Corporation
target-tilegx: Add cpu basic features for linux-user
target-tilegx: Add several helpers for instructions translation
target-tilegx: Generate tcg instructions to finish "Hello world"
target-tilegx: Add TILE-Gx building files
configure | 2 +
default-configs/tilegx-linux-user.mak | 1 +
include/elf.h | 2 +
linux-user/elfload.c | 23 +
linux-user/main.c | 295 ++++
linux-user/syscall.c | 50 +-
linux-user/syscall_defs.h | 14 +-
linux-user/tilegx/syscall.h | 40 +
linux-user/tilegx/syscall_nr.h | 324 ++++
linux-user/tilegx/target_cpu.h | 35 +
linux-user/tilegx/target_signal.h | 28 +
linux-user/tilegx/target_structs.h | 46 +
linux-user/tilegx/termbits.h | 274 +++
target-tilegx/Makefile.objs | 1 +
target-tilegx/cpu.c | 143 ++
target-tilegx/cpu.h | 175 ++
target-tilegx/helper.c | 83 +
target-tilegx/helper.h | 5 +
target-tilegx/opcode_tilegx.h | 1406 ++++++++++++++++
target-tilegx/spr_def_64.h | 216 +++
target-tilegx/translate.c | 2966 +++++++++++++++++++++++++++++++++
21 files changed, 6123 insertions(+), 6 deletions(-)
create mode 100644 default-configs/tilegx-linux-user.mak
create mode 100644 linux-user/tilegx/syscall.h
create mode 100644 linux-user/tilegx/syscall_nr.h
create mode 100644 linux-user/tilegx/target_cpu.h
create mode 100644 linux-user/tilegx/target_signal.h
create mode 100644 linux-user/tilegx/target_structs.h
create mode 100644 linux-user/tilegx/termbits.h
create mode 100644 target-tilegx/Makefile.objs
create mode 100644 target-tilegx/cpu.c
create mode 100644 target-tilegx/cpu.h
create mode 100644 target-tilegx/helper.c
create mode 100644 target-tilegx/helper.h
create mode 100644 target-tilegx/opcode_tilegx.h
create mode 100644 target-tilegx/spr_def_64.h
create mode 100644 target-tilegx/translate.c
--
1.9.3
* [Qemu-devel] [PATCH 01/10 v12] linux-user: tilegx: Firstly add architecture related features
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
@ 2015-06-13 13:08 ` Chen Gang
2015-06-13 13:10 ` [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user Chen Gang
` (9 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:08 UTC
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
These headers are based on the Linux kernel's tilegx support for 64-bit
binaries and on the TILE-Gx ABI reference document, with additional
reference to other targets' implementations.
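
For context, the only non-trivial definition below is TILEGX_IS_ERRNO():
following the usual Linux convention, a syscall return value in the top
4095 values of the unsigned 64-bit range encodes a negative errno. A
minimal sketch of how a caller can split such a return value (the macro
is taken from the patch; split_return() is a hypothetical helper, for
illustration only):

#include <stdint.h>

#define TILEGX_IS_ERRNO(ret) \
    ((ret) > 0xfffffffffffff000ULL)  /* values -4095..-1 encode errno */

/* Hypothetical helper: split a raw syscall return into result/errno. */
static int split_return(uint64_t ret, int *err)
{
    if (TILEGX_IS_ERRNO(ret)) {
        *err = -(int64_t)ret;        /* recover the positive errno */
        return -1;
    }
    *err = 0;
    return 0;                        /* ret is a normal result */
}

This is the same check that cpu_loop() in patch 02 applies to decide
whether to set TILEGX_R_ERR after do_syscall() returns.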
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
linux-user/tilegx/syscall.h | 40 +++++
linux-user/tilegx/syscall_nr.h | 324 +++++++++++++++++++++++++++++++++++++
linux-user/tilegx/target_cpu.h | 35 ++++
linux-user/tilegx/target_signal.h | 28 ++++
linux-user/tilegx/target_structs.h | 46 ++++++
linux-user/tilegx/termbits.h | 274 +++++++++++++++++++++++++++++++
6 files changed, 747 insertions(+)
create mode 100644 linux-user/tilegx/syscall.h
create mode 100644 linux-user/tilegx/syscall_nr.h
create mode 100644 linux-user/tilegx/target_cpu.h
create mode 100644 linux-user/tilegx/target_signal.h
create mode 100644 linux-user/tilegx/target_structs.h
create mode 100644 linux-user/tilegx/termbits.h
diff --git a/linux-user/tilegx/syscall.h b/linux-user/tilegx/syscall.h
new file mode 100644
index 0000000..653ece1
--- /dev/null
+++ b/linux-user/tilegx/syscall.h
@@ -0,0 +1,40 @@
+#ifndef TILEGX_SYSCALLS_H
+#define TILEGX_SYSCALLS_H
+
+#define UNAME_MACHINE "tilegx"
+#define UNAME_MINIMUM_RELEASE "3.19"
+
+#define MMAP_SHIFT TARGET_PAGE_BITS
+
+#define TILEGX_IS_ERRNO(ret) \
+ ((ret) > 0xfffffffffffff000ULL) /* values -4095..-1 encode errno */
+
+typedef uint64_t tilegx_reg_t;
+
+struct target_pt_regs {
+
+ union {
+ /* Saved main processor registers; 56..63 are special. */
+ tilegx_reg_t regs[56];
+ struct {
+ tilegx_reg_t __regs[53];
+ tilegx_reg_t tp; /* aliases regs[TREG_TP] */
+ tilegx_reg_t sp; /* aliases regs[TREG_SP] */
+ tilegx_reg_t lr; /* aliases regs[TREG_LR] */
+ };
+ };
+
+ /* Saved special registers. */
+ tilegx_reg_t pc; /* stored in EX_CONTEXT_K_0 */
+ tilegx_reg_t ex1; /* stored in EX_CONTEXT_K_1 (PL and ICS bit) */
+ tilegx_reg_t faultnum; /* fault number (INT_SWINT_1 for syscall) */
+ tilegx_reg_t orig_r0; /* r0 at syscall entry, else zero */
+ tilegx_reg_t flags; /* flags (see below) */
+ tilegx_reg_t cmpexch; /* value of CMPEXCH_VALUE SPR at interrupt */
+ tilegx_reg_t pad[2];
+};
+
+#define TARGET_MLOCKALL_MCL_CURRENT 1
+#define TARGET_MLOCKALL_MCL_FUTURE 2
+
+#endif
diff --git a/linux-user/tilegx/syscall_nr.h b/linux-user/tilegx/syscall_nr.h
new file mode 100644
index 0000000..1dca348
--- /dev/null
+++ b/linux-user/tilegx/syscall_nr.h
@@ -0,0 +1,324 @@
+#ifndef TILEGX_SYSCALL_NR
+#define TILEGX_SYSCALL_NR
+
+/*
+ * Copied from the Linux kernel's asm-generic/unistd.h, which tilegx uses.
+ */
+#define TARGET_NR_io_setup 0
+#define TARGET_NR_io_destroy 1
+#define TARGET_NR_io_submit 2
+#define TARGET_NR_io_cancel 3
+#define TARGET_NR_io_getevents 4
+#define TARGET_NR_setxattr 5
+#define TARGET_NR_lsetxattr 6
+#define TARGET_NR_fsetxattr 7
+#define TARGET_NR_getxattr 8
+#define TARGET_NR_lgetxattr 9
+#define TARGET_NR_fgetxattr 10
+#define TARGET_NR_listxattr 11
+#define TARGET_NR_llistxattr 12
+#define TARGET_NR_flistxattr 13
+#define TARGET_NR_removexattr 14
+#define TARGET_NR_lremovexattr 15
+#define TARGET_NR_fremovexattr 16
+#define TARGET_NR_getcwd 17
+#define TARGET_NR_lookup_dcookie 18
+#define TARGET_NR_eventfd2 19
+#define TARGET_NR_epoll_create1 20
+#define TARGET_NR_epoll_ctl 21
+#define TARGET_NR_epoll_pwait 22
+#define TARGET_NR_dup 23
+#define TARGET_NR_dup3 24
+#define TARGET_NR_fcntl 25
+#define TARGET_NR_inotify_init1 26
+#define TARGET_NR_inotify_add_watch 27
+#define TARGET_NR_inotify_rm_watch 28
+#define TARGET_NR_ioctl 29
+#define TARGET_NR_ioprio_set 30
+#define TARGET_NR_ioprio_get 31
+#define TARGET_NR_flock 32
+#define TARGET_NR_mknodat 33
+#define TARGET_NR_mkdirat 34
+#define TARGET_NR_unlinkat 35
+#define TARGET_NR_symlinkat 36
+#define TARGET_NR_linkat 37
+#define TARGET_NR_renameat 38
+#define TARGET_NR_umount2 39
+#define TARGET_NR_mount 40
+#define TARGET_NR_pivot_root 41
+#define TARGET_NR_nfsservctl 42
+#define TARGET_NR_statfs 43
+#define TARGET_NR_fstatfs 44
+#define TARGET_NR_truncate 45
+#define TARGET_NR_ftruncate 46
+#define TARGET_NR_fallocate 47
+#define TARGET_NR_faccessat 48
+#define TARGET_NR_chdir 49
+#define TARGET_NR_fchdir 50
+#define TARGET_NR_chroot 51
+#define TARGET_NR_fchmod 52
+#define TARGET_NR_fchmodat 53
+#define TARGET_NR_fchownat 54
+#define TARGET_NR_fchown 55
+#define TARGET_NR_openat 56
+#define TARGET_NR_close 57
+#define TARGET_NR_vhangup 58
+#define TARGET_NR_pipe2 59
+#define TARGET_NR_quotactl 60
+#define TARGET_NR_getdents64 61
+#define TARGET_NR_lseek 62
+#define TARGET_NR_read 63
+#define TARGET_NR_write 64
+#define TARGET_NR_readv 65
+#define TARGET_NR_writev 66
+#define TARGET_NR_pread64 67
+#define TARGET_NR_pwrite64 68
+#define TARGET_NR_preadv 69
+#define TARGET_NR_pwritev 70
+#define TARGET_NR_sendfile 71
+#define TARGET_NR_pselect6 72
+#define TARGET_NR_ppoll 73
+#define TARGET_NR_signalfd4 74
+#define TARGET_NR_vmsplice 75
+#define TARGET_NR_splice 76
+#define TARGET_NR_tee 77
+#define TARGET_NR_readlinkat 78
+#define TARGET_NR_fstatat64 79 /* named fstatat64 so that syscall.c handles it */
+#define TARGET_NR_fstat 80
+#define TARGET_NR_sync 81
+#define TARGET_NR_fsync 82
+#define TARGET_NR_fdatasync 83
+#define TARGET_NR_sync_file_range 84 /* tilegx uses sync_file_range, not range2 */
+#define TARGET_NR_timerfd_create 85
+#define TARGET_NR_timerfd_settime 86
+#define TARGET_NR_timerfd_gettime 87
+#define TARGET_NR_utimensat 88
+#define TARGET_NR_acct 89
+#define TARGET_NR_capget 90
+#define TARGET_NR_capset 91
+#define TARGET_NR_personality 92
+#define TARGET_NR_exit 93
+#define TARGET_NR_exit_group 94
+#define TARGET_NR_waitid 95
+#define TARGET_NR_set_tid_address 96
+#define TARGET_NR_unshare 97
+#define TARGET_NR_futex 98
+#define TARGET_NR_set_robust_list 99
+#define TARGET_NR_get_robust_list 100
+#define TARGET_NR_nanosleep 101
+#define TARGET_NR_getitimer 102
+#define TARGET_NR_setitimer 103
+#define TARGET_NR_kexec_load 104
+#define TARGET_NR_init_module 105
+#define TARGET_NR_delete_module 106
+#define TARGET_NR_timer_create 107
+#define TARGET_NR_timer_gettime 108
+#define TARGET_NR_timer_getoverrun 109
+#define TARGET_NR_timer_settime 110
+#define TARGET_NR_timer_delete 111
+#define TARGET_NR_clock_settime 112
+#define TARGET_NR_clock_gettime 113
+#define TARGET_NR_clock_getres 114
+#define TARGET_NR_clock_nanosleep 115
+#define TARGET_NR_syslog 116
+#define TARGET_NR_ptrace 117
+#define TARGET_NR_sched_setparam 118
+#define TARGET_NR_sched_setscheduler 119
+#define TARGET_NR_sched_getscheduler 120
+#define TARGET_NR_sched_getparam 121
+#define TARGET_NR_sched_setaffinity 122
+#define TARGET_NR_sched_getaffinity 123
+#define TARGET_NR_sched_yield 124
+#define TARGET_NR_sched_get_priority_max 125
+#define TARGET_NR_sched_get_priority_min 126
+#define TARGET_NR_sched_rr_get_interval 127
+#define TARGET_NR_restart_syscall 128
+#define TARGET_NR_kill 129
+#define TARGET_NR_tkill 130
+#define TARGET_NR_tgkill 131
+#define TARGET_NR_sigaltstack 132
+#define TARGET_NR_rt_sigsuspend 133
+#define TARGET_NR_rt_sigaction 134
+#define TARGET_NR_rt_sigprocmask 135
+#define TARGET_NR_rt_sigpending 136
+#define TARGET_NR_rt_sigtimedwait 137
+#define TARGET_NR_rt_sigqueueinfo 138
+#define TARGET_NR_rt_sigreturn 139
+#define TARGET_NR_setpriority 140
+#define TARGET_NR_getpriority 141
+#define TARGET_NR_reboot 142
+#define TARGET_NR_setregid 143
+#define TARGET_NR_setgid 144
+#define TARGET_NR_setreuid 145
+#define TARGET_NR_setuid 146
+#define TARGET_NR_setresuid 147
+#define TARGET_NR_getresuid 148
+#define TARGET_NR_setresgid 149
+#define TARGET_NR_getresgid 150
+#define TARGET_NR_setfsuid 151
+#define TARGET_NR_setfsgid 152
+#define TARGET_NR_times 153
+#define TARGET_NR_setpgid 154
+#define TARGET_NR_getpgid 155
+#define TARGET_NR_getsid 156
+#define TARGET_NR_setsid 157
+#define TARGET_NR_getgroups 158
+#define TARGET_NR_setgroups 159
+#define TARGET_NR_uname 160
+#define TARGET_NR_sethostname 161
+#define TARGET_NR_setdomainname 162
+#define TARGET_NR_getrlimit 163
+#define TARGET_NR_setrlimit 164
+#define TARGET_NR_getrusage 165
+#define TARGET_NR_umask 166
+#define TARGET_NR_prctl 167
+#define TARGET_NR_getcpu 168
+#define TARGET_NR_gettimeofday 169
+#define TARGET_NR_settimeofday 170
+#define TARGET_NR_adjtimex 171
+#define TARGET_NR_getpid 172
+#define TARGET_NR_getppid 173
+#define TARGET_NR_getuid 174
+#define TARGET_NR_geteuid 175
+#define TARGET_NR_getgid 176
+#define TARGET_NR_getegid 177
+#define TARGET_NR_gettid 178
+#define TARGET_NR_sysinfo 179
+#define TARGET_NR_mq_open 180
+#define TARGET_NR_mq_unlink 181
+#define TARGET_NR_mq_timedsend 182
+#define TARGET_NR_mq_timedreceive 183
+#define TARGET_NR_mq_notify 184
+#define TARGET_NR_mq_getsetattr 185
+#define TARGET_NR_msgget 186
+#define TARGET_NR_msgctl 187
+#define TARGET_NR_msgrcv 188
+#define TARGET_NR_msgsnd 189
+#define TARGET_NR_semget 190
+#define TARGET_NR_semctl 191
+#define TARGET_NR_semtimedop 192
+#define TARGET_NR_semop 193
+#define TARGET_NR_shmget 194
+#define TARGET_NR_shmctl 195
+#define TARGET_NR_shmat 196
+#define TARGET_NR_shmdt 197
+#define TARGET_NR_socket 198
+#define TARGET_NR_socketpair 199
+#define TARGET_NR_bind 200
+#define TARGET_NR_listen 201
+#define TARGET_NR_accept 202
+#define TARGET_NR_connect 203
+#define TARGET_NR_getsockname 204
+#define TARGET_NR_getpeername 205
+#define TARGET_NR_sendto 206
+#define TARGET_NR_recvfrom 207
+#define TARGET_NR_setsockopt 208
+#define TARGET_NR_getsockopt 209
+#define TARGET_NR_shutdown 210
+#define TARGET_NR_sendmsg 211
+#define TARGET_NR_recvmsg 212
+#define TARGET_NR_readahead 213
+#define TARGET_NR_brk 214
+#define TARGET_NR_munmap 215
+#define TARGET_NR_mremap 216
+#define TARGET_NR_add_key 217
+#define TARGET_NR_request_key 218
+#define TARGET_NR_keyctl 219
+#define TARGET_NR_clone 220
+#define TARGET_NR_execve 221
+#define TARGET_NR_mmap 222
+#define TARGET_NR_fadvise64 223
+#define TARGET_NR_swapon 224
+#define TARGET_NR_swapoff 225
+#define TARGET_NR_mprotect 226
+#define TARGET_NR_msync 227
+#define TARGET_NR_mlock 228
+#define TARGET_NR_munlock 229
+#define TARGET_NR_mlockall 230
+#define TARGET_NR_munlockall 231
+#define TARGET_NR_mincore 232
+#define TARGET_NR_madvise 233
+#define TARGET_NR_remap_file_pages 234
+#define TARGET_NR_mbind 235
+#define TARGET_NR_get_mempolicy 236
+#define TARGET_NR_set_mempolicy 237
+#define TARGET_NR_migrate_pages 238
+#define TARGET_NR_move_pages 239
+#define TARGET_NR_rt_tgsigqueueinfo 240
+#define TARGET_NR_perf_event_open 241
+#define TARGET_NR_accept4 242
+#define TARGET_NR_recvmmsg 243
+
+#define TARGET_NR_arch_specific_syscall 244
+#define TARGET_NR_cacheflush 245 /* tilegx-specific syscall */
+
+#define TARGET_NR_wait4 260
+#define TARGET_NR_prlimit64 261
+#define TARGET_NR_fanotify_init 262
+#define TARGET_NR_fanotify_mark 263
+#define TARGET_NR_name_to_handle_at 264
+#define TARGET_NR_open_by_handle_at 265
+#define TARGET_NR_clock_adjtime 266
+#define TARGET_NR_syncfs 267
+#define TARGET_NR_setns 268
+#define TARGET_NR_sendmmsg 269
+#define TARGET_NR_process_vm_readv 270
+#define TARGET_NR_process_vm_writev 271
+#define TARGET_NR_kcmp 272
+#define TARGET_NR_finit_module 273
+#define TARGET_NR_sched_setattr 274
+#define TARGET_NR_sched_getattr 275
+#define TARGET_NR_renameat2 276
+#define TARGET_NR_seccomp 277
+#define TARGET_NR_getrandom 278
+#define TARGET_NR_memfd_create 279
+#define TARGET_NR_bpf 280
+#define TARGET_NR_execveat 281
+
+#define TARGET_NR_open 1024
+#define TARGET_NR_link 1025
+#define TARGET_NR_unlink 1026
+#define TARGET_NR_mknod 1027
+#define TARGET_NR_chmod 1028
+#define TARGET_NR_chown 1029
+#define TARGET_NR_mkdir 1030
+#define TARGET_NR_rmdir 1031
+#define TARGET_NR_lchown 1032
+#define TARGET_NR_access 1033
+#define TARGET_NR_rename 1034
+#define TARGET_NR_readlink 1035
+#define TARGET_NR_symlink 1036
+#define TARGET_NR_utimes 1037
+#define TARGET_NR_stat64 1038 /* named stat64 so that syscall.c handles it */
+#define TARGET_NR_lstat 1039
+
+#define TARGET_NR_pipe 1040
+#define TARGET_NR_dup2 1041
+#define TARGET_NR_epoll_create 1042
+#define TARGET_NR_inotify_init 1043
+#define TARGET_NR_eventfd 1044
+#define TARGET_NR_signalfd 1045
+
+#define TARGET_NR_alarm 1059
+#define TARGET_NR_getpgrp 1060
+#define TARGET_NR_pause 1061
+#define TARGET_NR_time 1062
+#define TARGET_NR_utime 1063
+#define TARGET_NR_creat 1064
+#define TARGET_NR_getdents 1065
+#define TARGET_NR_futimesat 1066
+#define TARGET_NR_select 1067
+#define TARGET_NR_poll 1068
+#define TARGET_NR_epoll_wait 1069
+#define TARGET_NR_ustat 1070
+#define TARGET_NR_vfork 1071
+#define TARGET_NR_oldwait4 1072
+#define TARGET_NR_recv 1073
+#define TARGET_NR_send 1074
+#define TARGET_NR_bdflush 1075
+#define TARGET_NR_umount 1076
+#define TARGET_NR_uselib 1077
+#define TARGET_NR__sysctl 1078
+#define TARGET_NR_fork 1079
+
+#endif
diff --git a/linux-user/tilegx/target_cpu.h b/linux-user/tilegx/target_cpu.h
new file mode 100644
index 0000000..c96e81d
--- /dev/null
+++ b/linux-user/tilegx/target_cpu.h
@@ -0,0 +1,35 @@
+/*
+ * TILE-Gx specific CPU ABI and functions for linux-user
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef TARGET_CPU_H
+#define TARGET_CPU_H
+
+static inline void cpu_clone_regs(CPUTLGState *env, target_ulong newsp)
+{
+ if (newsp) {
+ env->regs[TILEGX_R_SP] = newsp;
+ }
+ env->regs[TILEGX_R_RE] = 0;
+}
+
+static inline void cpu_set_tls(CPUTLGState *env, target_ulong newtls)
+{
+ env->regs[TILEGX_R_TP] = newtls;
+}
+
+#endif
diff --git a/linux-user/tilegx/target_signal.h b/linux-user/tilegx/target_signal.h
new file mode 100644
index 0000000..b595f98
--- /dev/null
+++ b/linux-user/tilegx/target_signal.h
@@ -0,0 +1,28 @@
+#ifndef TARGET_SIGNAL_H
+#define TARGET_SIGNAL_H
+
+#include "cpu.h"
+
+/* this struct defines a stack used during syscall handling */
+
+typedef struct target_sigaltstack {
+ abi_ulong ss_sp;
+ abi_int ss_flags;
+ abi_ulong ss_size;
+} target_stack_t;
+
+/*
+ * sigaltstack controls
+ */
+#define TARGET_SS_ONSTACK 1
+#define TARGET_SS_DISABLE 2
+
+#define TARGET_MINSIGSTKSZ 2048
+#define TARGET_SIGSTKSZ 8192
+
+static inline abi_ulong get_sp_from_cpustate(CPUTLGState *state)
+{
+ return state->regs[TILEGX_R_SP];
+}
+
+#endif /* TARGET_SIGNAL_H */
diff --git a/linux-user/tilegx/target_structs.h b/linux-user/tilegx/target_structs.h
new file mode 100644
index 0000000..7d3ff78
--- /dev/null
+++ b/linux-user/tilegx/target_structs.h
@@ -0,0 +1,46 @@
+/*
+ * TILE-Gx specific structures for linux-user
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef TARGET_STRUCTS_H
+#define TARGET_STRUCTS_H
+
+struct target_ipc_perm {
+ abi_int __key; /* Key. */
+ abi_uint uid; /* Owner's user ID. */
+ abi_uint gid; /* Owner's group ID. */
+ abi_uint cuid; /* Creator's user ID. */
+ abi_uint cgid; /* Creator's group ID. */
+ abi_uint mode; /* Read/write permission. */
+ abi_ushort __seq; /* Sequence number. */
+};
+
+struct target_shmid_ds {
+ struct target_ipc_perm shm_perm; /* operation permission struct */
+ abi_long shm_segsz; /* size of segment in bytes */
+ abi_ulong shm_atime; /* time of last shmat() */
+ abi_ulong shm_dtime; /* time of last shmdt() */
+ abi_ulong shm_ctime; /* time of last change by shmctl() */
+ abi_int shm_cpid; /* pid of creator */
+ abi_int shm_lpid; /* pid of last shmop */
+ abi_ushort shm_nattch; /* number of current attaches */
+ abi_ushort shm_unused; /* compatibility */
+ abi_ulong __unused4;
+ abi_ulong __unused5;
+};
+
+#endif
diff --git a/linux-user/tilegx/termbits.h b/linux-user/tilegx/termbits.h
new file mode 100644
index 0000000..91ec236
--- /dev/null
+++ b/linux-user/tilegx/termbits.h
@@ -0,0 +1,274 @@
+#ifndef TILEGX_TERMBITS_H
+#define TILEGX_TERMBITS_H
+
+/* From asm-generic/termbits.h, which is used by tilegx */
+
+#define TARGET_NCCS 19
+struct target_termios {
+ unsigned int c_iflag; /* input mode flags */
+ unsigned int c_oflag; /* output mode flags */
+ unsigned int c_cflag; /* control mode flags */
+ unsigned int c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[TARGET_NCCS]; /* control characters */
+};
+
+struct target_termios2 {
+ unsigned int c_iflag; /* input mode flags */
+ unsigned int c_oflag; /* output mode flags */
+ unsigned int c_cflag; /* control mode flags */
+ unsigned int c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[TARGET_NCCS]; /* control characters */
+ unsigned int c_ispeed; /* input speed */
+ unsigned int c_ospeed; /* output speed */
+};
+
+/* c_cc characters */
+#define TARGET_VINTR 0
+#define TARGET_VQUIT 1
+#define TARGET_VERASE 2
+#define TARGET_VKILL 3
+#define TARGET_VEOF 4
+#define TARGET_VTIME 5
+#define TARGET_VMIN 6
+#define TARGET_VSWTC 7
+#define TARGET_VSTART 8
+#define TARGET_VSTOP 9
+#define TARGET_VSUSP 10
+#define TARGET_VEOL 11
+#define TARGET_VREPRINT 12
+#define TARGET_VDISCARD 13
+#define TARGET_VWERASE 14
+#define TARGET_VLNEXT 15
+#define TARGET_VEOL2 16
+
+/* c_iflag bits */
+#define TARGET_IGNBRK 0000001
+#define TARGET_BRKINT 0000002
+#define TARGET_IGNPAR 0000004
+#define TARGET_PARMRK 0000010
+#define TARGET_INPCK 0000020
+#define TARGET_ISTRIP 0000040
+#define TARGET_INLCR 0000100
+#define TARGET_IGNCR 0000200
+#define TARGET_ICRNL 0000400
+#define TARGET_IUCLC 0001000
+#define TARGET_IXON 0002000
+#define TARGET_IXANY 0004000
+#define TARGET_IXOFF 0010000
+#define TARGET_IMAXBEL 0020000
+#define TARGET_IUTF8 0040000
+
+/* c_oflag bits */
+#define TARGET_OPOST 0000001
+#define TARGET_OLCUC 0000002
+#define TARGET_ONLCR 0000004
+#define TARGET_OCRNL 0000010
+#define TARGET_ONOCR 0000020
+#define TARGET_ONLRET 0000040
+#define TARGET_OFILL 0000100
+#define TARGET_OFDEL 0000200
+#define TARGET_NLDLY 0000400
+#define TARGET_NL0 0000000
+#define TARGET_NL1 0000400
+#define TARGET_CRDLY 0003000
+#define TARGET_CR0 0000000
+#define TARGET_CR1 0001000
+#define TARGET_CR2 0002000
+#define TARGET_CR3 0003000
+#define TARGET_TABDLY 0014000
+#define TARGET_TAB0 0000000
+#define TARGET_TAB1 0004000
+#define TARGET_TAB2 0010000
+#define TARGET_TAB3 0014000
+#define TARGET_XTABS 0014000
+#define TARGET_BSDLY 0020000
+#define TARGET_BS0 0000000
+#define TARGET_BS1 0020000
+#define TARGET_VTDLY 0040000
+#define TARGET_VT0 0000000
+#define TARGET_VT1 0040000
+#define TARGET_FFDLY 0100000
+#define TARGET_FF0 0000000
+#define TARGET_FF1 0100000
+
+/* c_cflag bit meaning */
+#define TARGET_CBAUD 0010017
+#define TARGET_B0 0000000 /* hang up */
+#define TARGET_B50 0000001
+#define TARGET_B75 0000002
+#define TARGET_B110 0000003
+#define TARGET_B134 0000004
+#define TARGET_B150 0000005
+#define TARGET_B200 0000006
+#define TARGET_B300 0000007
+#define TARGET_B600 0000010
+#define TARGET_B1200 0000011
+#define TARGET_B1800 0000012
+#define TARGET_B2400 0000013
+#define TARGET_B4800 0000014
+#define TARGET_B9600 0000015
+#define TARGET_B19200 0000016
+#define TARGET_B38400 0000017
+#define TARGET_EXTA TARGET_B19200
+#define TARGET_EXTB TARGET_B38400
+#define TARGET_CSIZE 0000060
+#define TARGET_CS5 0000000
+#define TARGET_CS6 0000020
+#define TARGET_CS7 0000040
+#define TARGET_CS8 0000060
+#define TARGET_CSTOPB 0000100
+#define TARGET_CREAD 0000200
+#define TARGET_PARENB 0000400
+#define TARGET_PARODD 0001000
+#define TARGET_HUPCL 0002000
+#define TARGET_CLOCAL 0004000
+#define TARGET_CBAUDEX 0010000
+#define TARGET_BOTHER 0010000
+#define TARGET_B57600 0010001
+#define TARGET_B115200 0010002
+#define TARGET_B230400 0010003
+#define TARGET_B460800 0010004
+#define TARGET_B500000 0010005
+#define TARGET_B576000 0010006
+#define TARGET_B921600 0010007
+#define TARGET_B1000000 0010010
+#define TARGET_B1152000 0010011
+#define TARGET_B1500000 0010012
+#define TARGET_B2000000 0010013
+#define TARGET_B2500000 0010014
+#define TARGET_B3000000 0010015
+#define TARGET_B3500000 0010016
+#define TARGET_B4000000 0010017
+#define TARGET_CIBAUD 002003600000 /* input baud rate */
+#define TARGET_CMSPAR 010000000000 /* mark or space (stick) parity */
+#define TARGET_CRTSCTS 020000000000 /* flow control */
+
+#define TARGET_IBSHIFT 16 /* Shift from CBAUD to CIBAUD */
+
+/* c_lflag bits */
+#define TARGET_ISIG 0000001
+#define TARGET_ICANON 0000002
+#define TARGET_XCASE 0000004
+#define TARGET_ECHO 0000010
+#define TARGET_ECHOE 0000020
+#define TARGET_ECHOK 0000040
+#define TARGET_ECHONL 0000100
+#define TARGET_NOFLSH 0000200
+#define TARGET_TOSTOP 0000400
+#define TARGET_ECHOCTL 0001000
+#define TARGET_ECHOPRT 0002000
+#define TARGET_ECHOKE 0004000
+#define TARGET_FLUSHO 0010000
+#define TARGET_PENDIN 0040000
+#define TARGET_IEXTEN 0100000
+#define TARGET_EXTPROC 0200000
+
+/* tcflow() and TCXONC use these */
+#define TARGET_TCOOFF 0
+#define TARGET_TCOON 1
+#define TARGET_TCIOFF 2
+#define TARGET_TCION 3
+
+/* tcflush() and TCFLSH use these */
+#define TARGET_TCIFLUSH 0
+#define TARGET_TCOFLUSH 1
+#define TARGET_TCIOFLUSH 2
+
+/* tcsetattr uses these */
+#define TARGET_TCSANOW 0
+#define TARGET_TCSADRAIN 1
+#define TARGET_TCSAFLUSH 2
+
+/* From asm-generic/ioctls.h, which is used by tilegx */
+
+#define TARGET_TCGETS 0x5401
+#define TARGET_TCSETS 0x5402
+#define TARGET_TCSETSW 0x5403
+#define TARGET_TCSETSF 0x5404
+#define TARGET_TCGETA 0x5405
+#define TARGET_TCSETA 0x5406
+#define TARGET_TCSETAW 0x5407
+#define TARGET_TCSETAF 0x5408
+#define TARGET_TCSBRK 0x5409
+#define TARGET_TCXONC 0x540A
+#define TARGET_TCFLSH 0x540B
+#define TARGET_TIOCEXCL 0x540C
+#define TARGET_TIOCNXCL 0x540D
+#define TARGET_TIOCSCTTY 0x540E
+#define TARGET_TIOCGPGRP 0x540F
+#define TARGET_TIOCSPGRP 0x5410
+#define TARGET_TIOCOUTQ 0x5411
+#define TARGET_TIOCSTI 0x5412
+#define TARGET_TIOCGWINSZ 0x5413
+#define TARGET_TIOCSWINSZ 0x5414
+#define TARGET_TIOCMGET 0x5415
+#define TARGET_TIOCMBIS 0x5416
+#define TARGET_TIOCMBIC 0x5417
+#define TARGET_TIOCMSET 0x5418
+#define TARGET_TIOCGSOFTCAR 0x5419
+#define TARGET_TIOCSSOFTCAR 0x541A
+#define TARGET_FIONREAD 0x541B
+#define TARGET_TIOCINQ TARGET_FIONREAD
+#define TARGET_TIOCLINUX 0x541C
+#define TARGET_TIOCCONS 0x541D
+#define TARGET_TIOCGSERIAL 0x541E
+#define TARGET_TIOCSSERIAL 0x541F
+#define TARGET_TIOCPKT 0x5420
+#define TARGET_FIONBIO 0x5421
+#define TARGET_TIOCNOTTY 0x5422
+#define TARGET_TIOCSETD 0x5423
+#define TARGET_TIOCGETD 0x5424
+#define TARGET_TCSBRKP 0x5425
+#define TARGET_TIOCSBRK 0x5427
+#define TARGET_TIOCCBRK 0x5428
+#define TARGET_TIOCGSID 0x5429
+#define TARGET_TCGETS2 TARGET_IOR('T', 0x2A, struct termios2)
+#define TARGET_TCSETS2 TARGET_IOW('T', 0x2B, struct termios2)
+#define TARGET_TCSETSW2 TARGET_IOW('T', 0x2C, struct termios2)
+#define TARGET_TCSETSF2 TARGET_IOW('T', 0x2D, struct termios2)
+#define TARGET_TIOCGRS485 0x542E
+#define TARGET_TIOCSRS485 0x542F
+#define TARGET_TIOCGPTN TARGET_IOR('T', 0x30, unsigned int)
+#define TARGET_TIOCSPTLCK TARGET_IOW('T', 0x31, int)
+#define TARGET_TIOCGDEV TARGET_IOR('T', 0x32, unsigned int)
+#define TARGET_TCGETX 0x5432
+#define TARGET_TCSETX 0x5433
+#define TARGET_TCSETXF 0x5434
+#define TARGET_TCSETXW 0x5435
+#define TARGET_TIOCSIG TARGET_IOW('T', 0x36, int)
+#define TARGET_TIOCVHANGUP 0x5437
+#define TARGET_TIOCGPKT TARGET_IOR('T', 0x38, int)
+#define TARGET_TIOCGPTLCK TARGET_IOR('T', 0x39, int)
+#define TARGET_TIOCGEXCL TARGET_IOR('T', 0x40, int)
+
+#define TARGET_FIONCLEX 0x5450
+#define TARGET_FIOCLEX 0x5451
+#define TARGET_FIOASYNC 0x5452
+#define TARGET_TIOCSERCONFIG 0x5453
+#define TARGET_TIOCSERGWILD 0x5454
+#define TARGET_TIOCSERSWILD 0x5455
+#define TARGET_TIOCGLCKTRMIOS 0x5456
+#define TARGET_TIOCSLCKTRMIOS 0x5457
+#define TARGET_TIOCSERGSTRUCT 0x5458
+#define TARGET_TIOCSERGETLSR 0x5459
+#define TARGET_TIOCSERGETMULTI 0x545A
+#define TARGET_TIOCSERSETMULTI 0x545B
+
+#define TARGET_TIOCMIWAIT 0x545C
+#define TARGET_TIOCGICOUNT 0x545D
+#define TARGET_FIOQSIZE 0x5460
+
+#define TARGET_TIOCPKT_DATA 0
+#define TARGET_TIOCPKT_FLUSHREAD 1
+#define TARGET_TIOCPKT_FLUSHWRITE 2
+#define TARGET_TIOCPKT_STOP 4
+#define TARGET_TIOCPKT_START 8
+#define TARGET_TIOCPKT_NOSTOP 16
+#define TARGET_TIOCPKT_DOSTOP 32
+#define TARGET_TIOCPKT_IOCTL 64
+
+#define TARGET_TIOCSER_TEMT 0x01
+
+#endif
--
1.9.3
* [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
2015-06-13 13:08 ` [Qemu-devel] [PATCH 01/10 v12] linux-user: tilegx: Firstly add architecture related features Chen Gang
@ 2015-06-13 13:10 ` Chen Gang
2015-07-19 11:31 ` Chen Gang
2015-06-13 13:13 ` [Qemu-devel] [PATCH 03/10 v12] linux-user/syscall.c: conditionally define syscalls which are not defined in tilegx Chen Gang
` (8 subsequent siblings)
10 siblings, 1 reply; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:10 UTC
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
Add the main execution loop, system call handling, and elf64 tilegx
binary loading, based on the Linux kernel's 64-bit tilegx
implementation.
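
The least obvious part of the port is the atomic instructions: the
translator raises an exception for cmpexch/exch/fetch{add,and,or}, and
cpu_loop() completes the memory operation between start_exclusive() and
end_exclusive(). A minimal host-side sketch of the 64-bit cmpexch
semantics, following the functional description quoted in do_exch()
below (cmpexch_model() is illustrative only, not code from the patch):

#include <stdint.h>

/*
 * Model of 64-bit cmpexch: memory is compared against the
 * CMPEXCH_VALUE SPR and the new value is stored only on a match;
 * the old memory value is always returned in the destination
 * register.  The real do_exch() additionally validates the guest
 * address and register numbers, and runs the whole sequence under
 * start_exclusive()/end_exclusive() to keep it atomic.
 */
static uint64_t cmpexch_model(uint64_t *addr, uint64_t spr_cmpexch,
                              uint64_t srcb)
{
    uint64_t memval = *addr;   /* rf[Dest] = memVal */

    if (memval == spr_cmpexch) {
        *addr = srcb;          /* write only when the values match */
    }
    return memval;
}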
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
include/elf.h | 2 +
linux-user/elfload.c | 23 ++++
linux-user/main.c | 295 ++++++++++++++++++++++++++++++++++++++++++++++
linux-user/syscall_defs.h | 14 ++-
4 files changed, 329 insertions(+), 5 deletions(-)
diff --git a/include/elf.h b/include/elf.h
index 4afd474..79859f0 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -133,6 +133,8 @@ typedef int64_t Elf64_Sxword;
#define EM_AARCH64 183
+#define EM_TILEGX 191 /* TILE-Gx */
+
/* This is the info that is needed to parse the dynamic section of the file */
#define DT_NULL 0
#define DT_NEEDED 1
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index b71e866..12d79f1 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1218,6 +1218,29 @@ static inline void init_thread(struct target_pt_regs *regs, struct image_info *i
#endif /* TARGET_S390X */
+#ifdef TARGET_TILEGX
+
+/* 42 bits real used address, a half for user mode */
+#define ELF_START_MMAP (0x00000020000000000ULL)
+
+#define elf_check_arch(x) ((x) == EM_TILEGX)
+
+#define ELF_CLASS ELFCLASS64
+#define ELF_DATA ELFDATA2LSB
+#define ELF_ARCH EM_TILEGX
+
+static inline void init_thread(struct target_pt_regs *regs,
+ struct image_info *infop)
+{
+ regs->pc = infop->entry;
+ regs->sp = infop->start_stack;
+}
+
+#define ELF_EXEC_PAGESIZE 65536 /* TILE-Gx page size is 64KB */
+
+#endif /* TARGET_TILEGX */
+
#ifndef ELF_PLATFORM
#define ELF_PLATFORM (NULL)
#endif
diff --git a/linux-user/main.c b/linux-user/main.c
index a0d3e58..0a44b38 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -3412,6 +3412,290 @@ void cpu_loop(CPUS390XState *env)
#endif /* TARGET_S390X */
+#ifdef TARGET_TILEGX
+
+static void gen_sigsegv_mapper(CPUTLGState *env, target_ulong addr)
+{
+ target_siginfo_t info;
+
+ info.si_signo = TARGET_SIGSEGV;
+ info.si_errno = 0;
+ info.si_code = TARGET_SEGV_MAPERR;
+ info._sifields._sigfault._addr = addr;
+ queue_signal(env, info.si_signo, &info);
+}
+
+static void gen_sigill_reg(CPUTLGState *env)
+{
+ target_siginfo_t info;
+
+ info.si_signo = TARGET_SIGILL;
+ info.si_errno = 0;
+ info.si_code = TARGET_ILL_PRVREG;
+ info._sifields._sigfault._addr = env->pc;
+ queue_signal(env, info.si_signo, &info);
+}
+
+static int get_regval(CPUTLGState *env, uint8_t reg, target_ulong *val)
+{
+ if (likely(reg < TILEGX_R_COUNT)) {
+ *val = env->regs[reg];
+ return 0;
+ }
+
+ switch (reg) {
+ case TILEGX_R_SN:
+ case TILEGX_R_ZERO:
+ *val = 0;
+ return 0;
+ case TILEGX_R_IDN0:
+ case TILEGX_R_IDN1:
+ case TILEGX_R_UDN0:
+ case TILEGX_R_UDN1:
+ case TILEGX_R_UDN2:
+ case TILEGX_R_UDN3:
+ return -1;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static int set_regval(CPUTLGState *env, uint8_t reg, uint64_t val)
+{
+ if (unlikely(reg >= TILEGX_R_COUNT)) {
+ switch (reg) {
+ case TILEGX_R_SN:
+ case TILEGX_R_ZERO:
+ return 0;
+ case TILEGX_R_IDN0:
+ case TILEGX_R_IDN1:
+ case TILEGX_R_UDN0:
+ case TILEGX_R_UDN1:
+ case TILEGX_R_UDN2:
+ case TILEGX_R_UDN3:
+ return -1;
+ default:
+ g_assert_not_reached();
+ }
+ }
+
+ env->regs[reg] = val;
+ return 0;
+}
+
+/*
+ * Compare the 8-byte contents of the CmpValue SPR with the 8-byte value in
+ * memory at the address held in the first source register. If the values are
+ * not equal, then no memory operation is performed. If the values are equal,
+ * the 8-byte quantity from the second source register is written into memory
+ * at the address held in the first source register. In either case, the result
+ * of the instruction is the value read from memory. The compare and write to
+ * memory are atomic and thus can be used for synchronization purposes. This
+ * instruction only operates for addresses aligned to a 8-byte boundary.
+ * Unaligned memory access causes an Unaligned Data Reference interrupt.
+ *
+ * Functional Description (64-bit)
+ * uint64_t memVal = memoryReadDoubleWord (rf[SrcA]);
+ * rf[Dest] = memVal;
+ * if (memVal == SPR[CmpValueSPR])
+ * memoryWriteDoubleWord (rf[SrcA], rf[SrcB]);
+ *
+ * Functional Description (32-bit)
+ * uint64_t memVal = signExtend32 (memoryReadWord (rf[SrcA]));
+ * rf[Dest] = memVal;
+ * if (memVal == signExtend32 (SPR[CmpValueSPR]))
+ * memoryWriteWord (rf[SrcA], rf[SrcB]);
+ *
+ *
+ * This function also handles exch and exch4, which do not involve the SPR.
+ */
+static void do_exch(CPUTLGState *env, bool quad, bool cmp)
+{
+ uint8_t rdst, rsrc, rsrcb;
+ target_ulong addr;
+ target_long val, sprval;
+
+ start_exclusive();
+
+ rdst = extract32(env->excparam, 16, 8);
+ rsrc = extract32(env->excparam, 8, 8);
+ rsrcb = extract32(env->excparam, 0, 8);
+
+ if (get_regval(env, rsrc, &addr)) {
+ goto sigill_reg;
+ }
+ if (quad ? get_user_s64(val, addr) : get_user_s32(val, addr)) {
+ goto sigsegv_mapper;
+ }
+
+ if (cmp) {
+ if (quad) {
+ sprval = env->spregs[TILEGX_SPR_CMPEXCH];
+ } else {
+ sprval = sextract64(env->spregs[TILEGX_SPR_CMPEXCH], 0, 32);
+ }
+ }
+
+ if (!cmp || val == sprval) {
+ target_long valb;
+
+ if (get_regval(env, rsrcb, (target_ulong *)&valb)) {
+ goto sigill_reg;
+ }
+ if (quad ? put_user_u64(valb, addr) : put_user_u32(valb, addr)) {
+ goto sigsegv_mapper;
+ }
+ }
+
+ if (set_regval(env, rdst, val)) {
+ goto sigill_reg;
+ }
+ end_exclusive();
+ return;
+
+sigill_reg:
+ end_exclusive();
+ gen_sigill_reg(env);
+ return;
+
+sigsegv_mapper:
+ end_exclusive();
+ gen_sigsegv_mapper(env, addr);
+}
+
+static void do_fetch(CPUTLGState *env, int trapnr, bool quad)
+{
+ uint8_t rdst, rsrc, rsrcb;
+ int8_t write = 1;
+ target_ulong addr;
+ target_long val, valb;
+
+ start_exclusive();
+
+ rdst = extract32(env->excparam, 16, 8);
+ rsrc = extract32(env->excparam, 8, 8);
+ rsrcb = extract32(env->excparam, 0, 8);
+
+ if (get_regval(env, rsrc, &addr)) {
+ goto sigill_reg;
+ }
+ if (quad ? get_user_s64(val, addr) : get_user_s32(val, addr)) {
+ goto sigsegv_mapper;
+ }
+
+ if (get_regval(env, rsrcb, (target_ulong *)&valb)) {
+ goto sigill_reg;
+ }
+ switch (trapnr) {
+ case TILEGX_EXCP_OPCODE_FETCHADD:
+ case TILEGX_EXCP_OPCODE_FETCHADD4:
+ valb += val;
+ break;
+ case TILEGX_EXCP_OPCODE_FETCHADDGEZ:
+ valb += val;
+ if (valb < 0) {
+ write = 0;
+ }
+ break;
+ case TILEGX_EXCP_OPCODE_FETCHADDGEZ4:
+ valb += val;
+ if ((int32_t)valb < 0) {
+ write = 0;
+ }
+ break;
+ case TILEGX_EXCP_OPCODE_FETCHAND:
+ case TILEGX_EXCP_OPCODE_FETCHAND4:
+ valb &= val;
+ break;
+ case TILEGX_EXCP_OPCODE_FETCHOR:
+ case TILEGX_EXCP_OPCODE_FETCHOR4:
+ valb |= val;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ if (write) {
+ if (quad ? put_user_u64(valb, addr) : put_user_u32(valb, addr)) {
+ goto sigsegv_mapper;
+ }
+ }
+
+ if (set_regval(env, rdst, val)) {
+ goto sigill_reg;
+ }
+ end_exclusive();
+ return;
+
+sigill_reg:
+ end_exclusive();
+ gen_sigill_reg(env);
+ return;
+
+sigsegv_mapper:
+ end_exclusive();
+ gen_sigsegv_mapper(env, addr);
+}
+
+void cpu_loop(CPUTLGState *env)
+{
+ CPUState *cs = CPU(tilegx_env_get_cpu(env));
+ int trapnr;
+
+ while (1) {
+ cpu_exec_start(cs);
+ trapnr = cpu_tilegx_exec(env);
+ cpu_exec_end(cs);
+ switch (trapnr) {
+ case TILEGX_EXCP_SYSCALL:
+ env->regs[TILEGX_R_RE] = do_syscall(env, env->regs[TILEGX_R_NR],
+ env->regs[0], env->regs[1],
+ env->regs[2], env->regs[3],
+ env->regs[4], env->regs[5],
+ env->regs[6], env->regs[7]);
+ env->regs[TILEGX_R_ERR] = TILEGX_IS_ERRNO(env->regs[TILEGX_R_RE])
+ ? env->regs[TILEGX_R_RE]
+ : 0;
+ break;
+ case TILEGX_EXCP_OPCODE_EXCH:
+ do_exch(env, true, false);
+ break;
+ case TILEGX_EXCP_OPCODE_EXCH4:
+ do_exch(env, false, false);
+ break;
+ case TILEGX_EXCP_OPCODE_CMPEXCH:
+ do_exch(env, true, true);
+ break;
+ case TILEGX_EXCP_OPCODE_CMPEXCH4:
+ do_exch(env, false, true);
+ break;
+ case TILEGX_EXCP_OPCODE_FETCHADD:
+ case TILEGX_EXCP_OPCODE_FETCHADDGEZ:
+ case TILEGX_EXCP_OPCODE_FETCHAND:
+ case TILEGX_EXCP_OPCODE_FETCHOR:
+ do_fetch(env, trapnr, true);
+ break;
+ case TILEGX_EXCP_OPCODE_FETCHADD4:
+ case TILEGX_EXCP_OPCODE_FETCHADDGEZ4:
+ case TILEGX_EXCP_OPCODE_FETCHAND4:
+ case TILEGX_EXCP_OPCODE_FETCHOR4:
+ do_fetch(env, trapnr, false);
+ break;
+ case TILEGX_EXCP_REG_IDN_ACCESS:
+ case TILEGX_EXCP_REG_UDN_ACCESS:
+ gen_sigill_reg(env);
+ break;
+ default:
+ fprintf(stderr, "trapnr is %d[0x%x].\n", trapnr, trapnr);
+ g_assert_not_reached();
+ }
+ process_pending_signals(env);
+ }
+}
+
+#endif
+
THREAD CPUState *thread_cpu;
void task_settid(TaskState *ts)
@@ -4387,6 +4671,17 @@ int main(int argc, char **argv, char **envp)
env->psw.mask = regs->psw.mask;
env->psw.addr = regs->psw.addr;
}
+#elif defined(TARGET_TILEGX)
+ {
+ int i;
+ for (i = 0; i < TILEGX_R_COUNT; i++) {
+ env->regs[i] = regs->regs[i];
+ }
+ for (i = 0; i < TILEGX_SPR_COUNT; i++) {
+ env->spregs[i] = 0;
+ }
+ env->pc = regs->pc;
+ }
#else
#error unsupported target CPU
#endif
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index edd5f3c..e6af073 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -64,8 +64,9 @@
#endif
#if defined(TARGET_I386) || defined(TARGET_ARM) || defined(TARGET_SH4) \
- || defined(TARGET_M68K) || defined(TARGET_CRIS) || defined(TARGET_UNICORE32) \
- || defined(TARGET_S390X) || defined(TARGET_OPENRISC)
+ || defined(TARGET_M68K) || defined(TARGET_CRIS) \
+ || defined(TARGET_UNICORE32) || defined(TARGET_S390X) \
+ || defined(TARGET_OPENRISC) || defined(TARGET_TILEGX)
#define TARGET_IOC_SIZEBITS 14
#define TARGET_IOC_DIRBITS 2
@@ -365,7 +366,8 @@ int do_sigaction(int sig, const struct target_sigaction *act,
|| defined(TARGET_PPC) || defined(TARGET_MIPS) || defined(TARGET_SH4) \
|| defined(TARGET_M68K) || defined(TARGET_ALPHA) || defined(TARGET_CRIS) \
|| defined(TARGET_MICROBLAZE) || defined(TARGET_UNICORE32) \
- || defined(TARGET_S390X) || defined(TARGET_OPENRISC)
+ || defined(TARGET_S390X) || defined(TARGET_OPENRISC) \
+ || defined(TARGET_TILEGX)
#if defined(TARGET_SPARC)
#define TARGET_SA_NOCLDSTOP 8u
@@ -1871,7 +1873,7 @@ struct target_stat {
abi_ulong target_st_ctime_nsec;
unsigned int __unused[2];
};
-#elif defined(TARGET_OPENRISC)
+#elif defined(TARGET_OPENRISC) || defined(TARGET_TILEGX)
/* These are the asm-generic versions of the stat and stat64 structures */
@@ -2264,7 +2266,9 @@ struct target_flock {
struct target_flock64 {
short l_type;
short l_whence;
-#if defined(TARGET_PPC) || defined(TARGET_X86_64) || defined(TARGET_MIPS) || defined(TARGET_SPARC) || defined(TARGET_HPPA) || defined (TARGET_MICROBLAZE)
+#if defined(TARGET_PPC) || defined(TARGET_X86_64) || defined(TARGET_MIPS) \
+ || defined(TARGET_SPARC) || defined(TARGET_HPPA) \
+ || defined(TARGET_MICROBLAZE) || defined(TARGET_TILEGX)
int __pad;
#endif
unsigned long long l_start;
--
1.9.3
* [Qemu-devel] [PATCH 03/10 v12] linux-user/syscall.c: conditionally define syscalls which are not defined in tilegx
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
2015-06-13 13:08 ` [Qemu-devel] [PATCH 01/10 v12] linux-user: tilegx: Firstly add architecture related features Chen Gang
2015-06-13 13:10 ` [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user Chen Gang
@ 2015-06-13 13:13 ` Chen Gang
2015-06-13 13:14 ` [Qemu-devel] [PATCH 04/10 v12] target-tilegx: Add opcode basic implementation from Tilera Corporation Chen Gang
` (7 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:13 UTC
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
On some architectures (e.g. tilegx), several syscall numbers are not
defined, so guard the corresponding cases with #ifdef.
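
The change is mechanical: each affected case in do_syscall() is wrapped
so that it only compiles when the target defines the corresponding
TARGET_NR_* macro. A self-contained sketch of the pattern
(do_syscall_demo() is illustrative only; the TARGET_NR_fork value 1079
is tilegx's, from patch 01):

#include <stdio.h>

#define TARGET_NR_fork 1079    /* tilegx's number, from patch 01 */

static long do_syscall_demo(int num)
{
    switch (num) {
#ifdef TARGET_NR_fork          /* case exists only if the target has it */
    case TARGET_NR_fork:
        return 0;              /* QEMU would call do_fork() here */
#endif
    default:
        return -1;             /* number unknown to this target */
    }
}

int main(void)
{
    printf("%ld\n", do_syscall_demo(TARGET_NR_fork));
    return 0;
}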
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
linux-user/syscall.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 49 insertions(+), 1 deletion(-)
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 1622ad6..a503673 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -213,7 +213,7 @@ static int gettid(void) {
return -ENOSYS;
}
#endif
-#ifdef __NR_getdents
+#if defined(TARGET_NR_getdents) && defined(__NR_getdents)
_syscall3(int, sys_getdents, uint, fd, struct linux_dirent *, dirp, uint, count);
#endif
#if !defined(__NR_getdents) || \
@@ -5581,6 +5581,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(write(arg1, p, arg3));
unlock_user(p, arg2, 0);
break;
+#ifdef TARGET_NR_open
case TARGET_NR_open:
if (!(p = lock_user_string(arg1)))
goto efault;
@@ -5589,6 +5590,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
arg3));
unlock_user(p, arg1, 0);
break;
+#endif
case TARGET_NR_openat:
if (!(p = lock_user_string(arg2)))
goto efault;
@@ -5603,9 +5605,11 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_brk:
ret = do_brk(arg1);
break;
+#ifdef TARGET_NR_fork
case TARGET_NR_fork:
ret = get_errno(do_fork(cpu_env, SIGCHLD, 0, 0, 0, 0));
break;
+#endif
#ifdef TARGET_NR_waitpid
case TARGET_NR_waitpid:
{
@@ -5640,6 +5644,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
break;
#endif
+#ifdef TARGET_NR_link
case TARGET_NR_link:
{
void * p2;
@@ -5653,6 +5658,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_linkat)
case TARGET_NR_linkat:
{
@@ -5670,12 +5676,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_unlink
case TARGET_NR_unlink:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(unlink(p));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_unlinkat)
case TARGET_NR_unlinkat:
if (!(p = lock_user_string(arg2)))
@@ -5792,12 +5800,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_mknod
case TARGET_NR_mknod:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(mknod(p, arg2, arg3));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_mknodat)
case TARGET_NR_mknodat:
if (!(p = lock_user_string(arg2)))
@@ -5806,12 +5816,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg2, 0);
break;
#endif
+#ifdef TARGET_NR_chmod
case TARGET_NR_chmod:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(chmod(p, arg2));
unlock_user(p, arg1, 0);
break;
+#endif
#ifdef TARGET_NR_break
case TARGET_NR_break:
goto unimplemented;
@@ -5946,6 +5958,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_utimes
case TARGET_NR_utimes:
{
struct timeval *tvp, tv[2];
@@ -5964,6 +5977,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_futimesat)
case TARGET_NR_futimesat:
{
@@ -5992,12 +6006,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_gtty:
goto unimplemented;
#endif
+#ifdef TARGET_NR_access
case TARGET_NR_access:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(access(path(p), arg2));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_faccessat) && defined(__NR_faccessat)
case TARGET_NR_faccessat:
if (!(p = lock_user_string(arg2)))
@@ -6022,6 +6038,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_kill:
ret = get_errno(kill(arg1, target_to_host_signal(arg2)));
break;
+#ifdef TARGET_NR_rename
case TARGET_NR_rename:
{
void *p2;
@@ -6035,6 +6052,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_renameat)
case TARGET_NR_renameat:
{
@@ -6050,12 +6068,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_mkdir
case TARGET_NR_mkdir:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(mkdir(p, arg2));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_mkdirat)
case TARGET_NR_mkdirat:
if (!(p = lock_user_string(arg2)))
@@ -6064,18 +6084,22 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg2, 0);
break;
#endif
+#ifdef TARGET_NR_rmdir
case TARGET_NR_rmdir:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(rmdir(p));
unlock_user(p, arg1, 0);
break;
+#endif
case TARGET_NR_dup:
ret = get_errno(dup(arg1));
break;
+#ifdef TARGET_NR_pipe
case TARGET_NR_pipe:
ret = do_pipe(cpu_env, arg1, 0, 0);
break;
+#endif
#ifdef TARGET_NR_pipe2
case TARGET_NR_pipe2:
ret = do_pipe(cpu_env, arg1,
@@ -6160,11 +6184,15 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(chroot(p));
unlock_user(p, arg1, 0);
break;
+#ifdef TARGET_NR_ustat
case TARGET_NR_ustat:
goto unimplemented;
+#endif
+#ifdef TARGET_NR_dup2
case TARGET_NR_dup2:
ret = get_errno(dup2(arg1, arg2));
break;
+#endif
#if defined(CONFIG_DUP3) && defined(TARGET_NR_dup3)
case TARGET_NR_dup3:
ret = get_errno(dup3(arg1, arg2, arg3));
@@ -6175,9 +6203,11 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(getppid());
break;
#endif
+#ifdef TARGET_NR_getpgrp
case TARGET_NR_getpgrp:
ret = get_errno(getpgrp());
break;
+#endif
case TARGET_NR_setsid:
ret = get_errno(setsid());
break;
@@ -6753,6 +6783,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_symlink
case TARGET_NR_symlink:
{
void *p2;
@@ -6766,6 +6797,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_symlinkat)
case TARGET_NR_symlinkat:
{
@@ -6785,6 +6817,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_oldlstat:
goto unimplemented;
#endif
+#ifdef TARGET_NR_readlink
case TARGET_NR_readlink:
{
void *p2;
@@ -6815,6 +6848,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_readlinkat)
case TARGET_NR_readlinkat:
{
@@ -7214,22 +7248,28 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
}
break;
+#ifdef TARGET_NR_stat
case TARGET_NR_stat:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(stat(path(p), &st));
unlock_user(p, arg1, 0);
goto do_stat;
+#endif
+#ifdef TARGET_NR_lstat
case TARGET_NR_lstat:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(lstat(path(p), &st));
unlock_user(p, arg1, 0);
goto do_stat;
+#endif
case TARGET_NR_fstat:
{
ret = get_errno(fstat(arg1, &st));
+#if defined(TARGET_NR_stat) || defined(TARGET_NR_lstat)
do_stat:
+#endif
if (!is_error(ret)) {
struct target_stat *target_st;
@@ -7517,6 +7557,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_getdents
case TARGET_NR_getdents:
#ifdef __NR_getdents
#if TARGET_ABI_BITS == 32 && HOST_LONG_BITS == 64
@@ -7647,6 +7688,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
#endif
break;
+#endif /* TARGET_NR_getdents */
#if defined(TARGET_NR_getdents64) && defined(__NR_getdents64)
case TARGET_NR_getdents64:
{
@@ -7786,11 +7828,13 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(fdatasync(arg1));
break;
#endif
+#ifdef TARGET_NR__sysctl
case TARGET_NR__sysctl:
/* We don't implement this, but ENOTDIR is always a safe
return value. */
ret = -TARGET_ENOTDIR;
break;
+#endif
case TARGET_NR_sched_getaffinity:
{
unsigned int mask_size;
@@ -8237,12 +8281,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = host_to_target_stat64(cpu_env, arg3, &st);
break;
#endif
+#ifdef TARGET_NR_lchown
case TARGET_NR_lchown:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(lchown(p, low2highuid(arg2), low2highgid(arg3)));
unlock_user(p, arg1, 0);
break;
+#endif
#ifdef TARGET_NR_getuid
case TARGET_NR_getuid:
ret = get_errno(high2lowuid(getuid()));
@@ -8365,12 +8411,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_chown
case TARGET_NR_chown:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(chown(p, low2highuid(arg2), low2highgid(arg3)));
unlock_user(p, arg1, 0);
break;
+#endif
case TARGET_NR_setuid:
ret = get_errno(setuid(low2highuid(arg1)));
break;
--
1.9.3
* [Qemu-devel] [PATCH 04/10 v12] target-tilegx: Add opcode basic implementation from Tilera Corporation
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (2 preceding siblings ...)
2015-06-13 13:13 ` [Qemu-devel] [PATCH 03/10 v12] linux-user/syscall.c: conditionally define syscalls which are not defined in tilegx Chen Gang
@ 2015-06-13 13:14 ` Chen Gang
2015-06-13 13:15 ` [Qemu-devel] [PATCH 05/10 v12] target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage Chen Gang
` (6 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:14 UTC
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
It is copied from the Linux kernel's
"arch/tile/include/uapi/arch/opcode_tilegx.h".
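
For orientation, the header is pure bit-field extraction: the Mode field
in bits 62..63 selects the X or Y encoding, and the per-pipeline
accessors pull out opcode and operand fields. A minimal decode sketch,
assuming (as the translator added later in this series does) that a zero
Mode field selects the X encoding; describe_bundle() is a hypothetical
helper, not part of the patch:

#include <stdio.h>
#include "opcode_tilegx.h"     /* the header added by this patch */

static void describe_bundle(tilegx_bundle_bits bundle)
{
    if (get_Mode(bundle) == 0) {        /* X encoding: two slots */
        printf("X0 op %u, X1 op %u\n",
               get_Opcode_X0(bundle), get_Opcode_X1(bundle));
    } else {                            /* Y encoding: three slots */
        printf("Y0 op %u, Y1 op %u, Y2 op %u\n",
               get_Opcode_Y0(bundle), get_Opcode_Y1(bundle),
               get_Opcode_Y2(bundle));
    }
}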
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/opcode_tilegx.h | 1406 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 1406 insertions(+)
create mode 100644 target-tilegx/opcode_tilegx.h
diff --git a/target-tilegx/opcode_tilegx.h b/target-tilegx/opcode_tilegx.h
new file mode 100644
index 0000000..d76ff2d
--- /dev/null
+++ b/target-tilegx/opcode_tilegx.h
@@ -0,0 +1,1406 @@
+/* TILE-Gx opcode information.
+ *
+ * Copyright 2011 Tilera Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for
+ * more details.
+ *
+ */
+
+#ifndef __ARCH_OPCODE_H__
+#define __ARCH_OPCODE_H__
+
+#ifndef __ASSEMBLER__
+
+typedef unsigned long long tilegx_bundle_bits;
+
+/* These are the bits that determine if a bundle is in the X encoding. */
+#define TILEGX_BUNDLE_MODE_MASK ((tilegx_bundle_bits)3 << 62)
+
+enum
+{
+ /* Maximum number of instructions in a bundle (2 for X, 3 for Y). */
+ TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE = 3,
+
+ /* How many different pipeline encodings are there? X0, X1, Y0, Y1, Y2. */
+ TILEGX_NUM_PIPELINE_ENCODINGS = 5,
+
+ /* Log base 2 of TILEGX_BUNDLE_SIZE_IN_BYTES. */
+ TILEGX_LOG2_BUNDLE_SIZE_IN_BYTES = 3,
+
+ /* Instructions take this many bytes. */
+ TILEGX_BUNDLE_SIZE_IN_BYTES = 1 << TILEGX_LOG2_BUNDLE_SIZE_IN_BYTES,
+
+ /* Log base 2 of TILEGX_BUNDLE_ALIGNMENT_IN_BYTES. */
+ TILEGX_LOG2_BUNDLE_ALIGNMENT_IN_BYTES = 3,
+
+ /* Bundles should be aligned modulo this number of bytes. */
+ TILEGX_BUNDLE_ALIGNMENT_IN_BYTES =
+ (1 << TILEGX_LOG2_BUNDLE_ALIGNMENT_IN_BYTES),
+
+ /* Number of registers (some are magic, such as network I/O). */
+ TILEGX_NUM_REGISTERS = 64,
+};
+
+/* Make a few "tile_" variables to simplify common code between
+ architectures. */
+
+typedef tilegx_bundle_bits tile_bundle_bits;
+#define TILE_BUNDLE_SIZE_IN_BYTES TILEGX_BUNDLE_SIZE_IN_BYTES
+#define TILE_BUNDLE_ALIGNMENT_IN_BYTES TILEGX_BUNDLE_ALIGNMENT_IN_BYTES
+#define TILE_LOG2_BUNDLE_ALIGNMENT_IN_BYTES \
+ TILEGX_LOG2_BUNDLE_ALIGNMENT_IN_BYTES
+#define TILE_BPT_BUNDLE TILEGX_BPT_BUNDLE
+
+/* 64-bit pattern for a { bpt ; nop } bundle. */
+#define TILEGX_BPT_BUNDLE 0x286a44ae51485000ULL
+
+static __inline unsigned int
+get_BFEnd_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_BFOpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 24)) & 0xf);
+}
+
+static __inline unsigned int
+get_BFStart_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3f);
+}
+
+static __inline unsigned int
+get_BrOff_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x0000003f) |
+ (((unsigned int)(n >> 37)) & 0x0001ffc0);
+}
+
+static __inline unsigned int
+get_BrType_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 54)) & 0x1f);
+}
+
+static __inline unsigned int
+get_Dest_Imm8_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x0000003f) |
+ (((unsigned int)(n >> 43)) & 0x000000c0);
+}
+
+static __inline unsigned int
+get_Dest_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 0)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Dest_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Dest_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 0)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Dest_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Imm16_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0xffff);
+}
+
+static __inline unsigned int
+get_Imm16_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0xffff);
+}
+
+static __inline unsigned int
+get_Imm8OpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 20)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8OpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 51)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0xff);
+}
+
+static __inline unsigned int
+get_JumpOff_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x7ffffff);
+}
+
+static __inline unsigned int
+get_JumpOpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 58)) & 0x1);
+}
+
+static __inline unsigned int
+get_MF_Imm14_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 37)) & 0x3fff);
+}
+
+static __inline unsigned int
+get_MT_Imm14_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x0000003f) |
+ (((unsigned int)(n >> 37)) & 0x00003fc0);
+}
+
+static __inline unsigned int
+get_Mode(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 62)) & 0x3);
+}
+
+static __inline unsigned int
+get_Opcode_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 28)) & 0x7);
+}
+
+static __inline unsigned int
+get_Opcode_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 59)) & 0x7);
+}
+
+static __inline unsigned int
+get_Opcode_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 27)) & 0xf);
+}
+
+static __inline unsigned int
+get_Opcode_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 58)) & 0xf);
+}
+
+static __inline unsigned int
+get_Opcode_Y2(tilegx_bundle_bits n)
+{
+ return (((n >> 26)) & 0x00000001) |
+ (((unsigned int)(n >> 56)) & 0x00000002);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3);
+}
+
+static __inline unsigned int
+get_ShAmt_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShAmt_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShAmt_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShAmt_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3);
+}
+
+static __inline unsigned int
+get_SrcA_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 6)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 37)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 6)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 37)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_Y2(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 20)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcBDest_Y2(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 51)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+
+static __inline int
+sign_extend(int n, int num_bits)
+{
+ int shift = (int)(sizeof(int) * 8 - num_bits);
+ return (n << shift) >> shift;
+}
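Together with the get_* accessors above, sign_extend() is all a decoder needs to recover a signed immediate. A small sketch (the helper name is ours; everything it calls is from this header):

    /* Illustrative only: the signed 16-bit immediate of an X0-slot
       addli/addxli.  get_Imm16_X0() yields the raw field and
       sign_extend() propagates bit 15 through the full int width. */
    static int get_signed_imm16_x0(tilegx_bundle_bits bundle)
    {
        return sign_extend((int)get_Imm16_X0(bundle), 16);
    }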
+
+
+
+static __inline tilegx_bundle_bits
+create_BFEnd_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_BFOpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xf) << 24);
+}
+
+static __inline tilegx_bundle_bits
+create_BFStart_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_BrOff_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x0000003f)) << 31) |
+ (((tilegx_bundle_bits)(n & 0x0001ffc0)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_BrType_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x1f)) << 54);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_Imm8_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x0000003f)) << 31) |
+ (((tilegx_bundle_bits)(n & 0x000000c0)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 0);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 31);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 0);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 31);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm16_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xffff) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm16_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xffff)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8OpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xff) << 20);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8OpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xff)) << 51);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xff) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xff)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xff) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xff)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_JumpOff_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x7ffffff)) << 31);
+}
+
+static __inline tilegx_bundle_bits
+create_JumpOpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x1)) << 58);
+}
+
+static __inline tilegx_bundle_bits
+create_MF_Imm14_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3fff)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_MT_Imm14_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x0000003f)) << 31) |
+ (((tilegx_bundle_bits)(n & 0x00003fc0)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_Mode(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3)) << 62);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x7) << 28);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x7)) << 59);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xf) << 27);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xf)) << 58);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_Y2(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x00000001) << 26) |
+ (((tilegx_bundle_bits)(n & 0x00000002)) << 56);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3ff) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3ff)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3ff) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3ff)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 6);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 6);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_Y2(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 20);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcBDest_Y2(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 51);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
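The create_* helpers are the mirror image of the accessors: OR their results together to assemble a bundle. A hedged sketch using the opcode enum that follows (only the X1 fields are shown; a real bundle would also need a valid X0 slot, e.g. an fnop, and X-encoded bundles keep Mode == 0):

    /* Illustrative only: encode "addli rD, rA, imm" in the X1 slot. */
    static tilegx_bundle_bits encode_addli_x1(int rd, int ra, int imm)
    {
        return create_Opcode_X1(ADDLI_OPCODE_X1) |
               create_Dest_X1(rd) |
               create_SrcA_X1(ra) |
               create_Imm16_X1(imm);
    }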
+
+enum
+{
+ ADDI_IMM8_OPCODE_X0 = 1,
+ ADDI_IMM8_OPCODE_X1 = 1,
+ ADDI_OPCODE_Y0 = 0,
+ ADDI_OPCODE_Y1 = 1,
+ ADDLI_OPCODE_X0 = 1,
+ ADDLI_OPCODE_X1 = 0,
+ ADDXI_IMM8_OPCODE_X0 = 2,
+ ADDXI_IMM8_OPCODE_X1 = 2,
+ ADDXI_OPCODE_Y0 = 1,
+ ADDXI_OPCODE_Y1 = 2,
+ ADDXLI_OPCODE_X0 = 2,
+ ADDXLI_OPCODE_X1 = 1,
+ ADDXSC_RRR_0_OPCODE_X0 = 1,
+ ADDXSC_RRR_0_OPCODE_X1 = 1,
+ ADDX_RRR_0_OPCODE_X0 = 2,
+ ADDX_RRR_0_OPCODE_X1 = 2,
+ ADDX_RRR_0_OPCODE_Y0 = 0,
+ ADDX_SPECIAL_0_OPCODE_Y1 = 0,
+ ADD_RRR_0_OPCODE_X0 = 3,
+ ADD_RRR_0_OPCODE_X1 = 3,
+ ADD_RRR_0_OPCODE_Y0 = 1,
+ ADD_SPECIAL_0_OPCODE_Y1 = 1,
+ ANDI_IMM8_OPCODE_X0 = 3,
+ ANDI_IMM8_OPCODE_X1 = 3,
+ ANDI_OPCODE_Y0 = 2,
+ ANDI_OPCODE_Y1 = 3,
+ AND_RRR_0_OPCODE_X0 = 4,
+ AND_RRR_0_OPCODE_X1 = 4,
+ AND_RRR_5_OPCODE_Y0 = 0,
+ AND_RRR_5_OPCODE_Y1 = 0,
+ BEQZT_BRANCH_OPCODE_X1 = 16,
+ BEQZ_BRANCH_OPCODE_X1 = 17,
+ BFEXTS_BF_OPCODE_X0 = 4,
+ BFEXTU_BF_OPCODE_X0 = 5,
+ BFINS_BF_OPCODE_X0 = 6,
+ BF_OPCODE_X0 = 3,
+ BGEZT_BRANCH_OPCODE_X1 = 18,
+ BGEZ_BRANCH_OPCODE_X1 = 19,
+ BGTZT_BRANCH_OPCODE_X1 = 20,
+ BGTZ_BRANCH_OPCODE_X1 = 21,
+ BLBCT_BRANCH_OPCODE_X1 = 22,
+ BLBC_BRANCH_OPCODE_X1 = 23,
+ BLBST_BRANCH_OPCODE_X1 = 24,
+ BLBS_BRANCH_OPCODE_X1 = 25,
+ BLEZT_BRANCH_OPCODE_X1 = 26,
+ BLEZ_BRANCH_OPCODE_X1 = 27,
+ BLTZT_BRANCH_OPCODE_X1 = 28,
+ BLTZ_BRANCH_OPCODE_X1 = 29,
+ BNEZT_BRANCH_OPCODE_X1 = 30,
+ BNEZ_BRANCH_OPCODE_X1 = 31,
+ BRANCH_OPCODE_X1 = 2,
+ CMOVEQZ_RRR_0_OPCODE_X0 = 5,
+ CMOVEQZ_RRR_4_OPCODE_Y0 = 0,
+ CMOVNEZ_RRR_0_OPCODE_X0 = 6,
+ CMOVNEZ_RRR_4_OPCODE_Y0 = 1,
+ CMPEQI_IMM8_OPCODE_X0 = 4,
+ CMPEQI_IMM8_OPCODE_X1 = 4,
+ CMPEQI_OPCODE_Y0 = 3,
+ CMPEQI_OPCODE_Y1 = 4,
+ CMPEQ_RRR_0_OPCODE_X0 = 7,
+ CMPEQ_RRR_0_OPCODE_X1 = 5,
+ CMPEQ_RRR_3_OPCODE_Y0 = 0,
+ CMPEQ_RRR_3_OPCODE_Y1 = 2,
+ CMPEXCH4_RRR_0_OPCODE_X1 = 6,
+ CMPEXCH_RRR_0_OPCODE_X1 = 7,
+ CMPLES_RRR_0_OPCODE_X0 = 8,
+ CMPLES_RRR_0_OPCODE_X1 = 8,
+ CMPLES_RRR_2_OPCODE_Y0 = 0,
+ CMPLES_RRR_2_OPCODE_Y1 = 0,
+ CMPLEU_RRR_0_OPCODE_X0 = 9,
+ CMPLEU_RRR_0_OPCODE_X1 = 9,
+ CMPLEU_RRR_2_OPCODE_Y0 = 1,
+ CMPLEU_RRR_2_OPCODE_Y1 = 1,
+ CMPLTSI_IMM8_OPCODE_X0 = 5,
+ CMPLTSI_IMM8_OPCODE_X1 = 5,
+ CMPLTSI_OPCODE_Y0 = 4,
+ CMPLTSI_OPCODE_Y1 = 5,
+ CMPLTS_RRR_0_OPCODE_X0 = 10,
+ CMPLTS_RRR_0_OPCODE_X1 = 10,
+ CMPLTS_RRR_2_OPCODE_Y0 = 2,
+ CMPLTS_RRR_2_OPCODE_Y1 = 2,
+ CMPLTUI_IMM8_OPCODE_X0 = 6,
+ CMPLTUI_IMM8_OPCODE_X1 = 6,
+ CMPLTU_RRR_0_OPCODE_X0 = 11,
+ CMPLTU_RRR_0_OPCODE_X1 = 11,
+ CMPLTU_RRR_2_OPCODE_Y0 = 3,
+ CMPLTU_RRR_2_OPCODE_Y1 = 3,
+ CMPNE_RRR_0_OPCODE_X0 = 12,
+ CMPNE_RRR_0_OPCODE_X1 = 12,
+ CMPNE_RRR_3_OPCODE_Y0 = 1,
+ CMPNE_RRR_3_OPCODE_Y1 = 3,
+ CMULAF_RRR_0_OPCODE_X0 = 13,
+ CMULA_RRR_0_OPCODE_X0 = 14,
+ CMULFR_RRR_0_OPCODE_X0 = 15,
+ CMULF_RRR_0_OPCODE_X0 = 16,
+ CMULHR_RRR_0_OPCODE_X0 = 17,
+ CMULH_RRR_0_OPCODE_X0 = 18,
+ CMUL_RRR_0_OPCODE_X0 = 19,
+ CNTLZ_UNARY_OPCODE_X0 = 1,
+ CNTLZ_UNARY_OPCODE_Y0 = 1,
+ CNTTZ_UNARY_OPCODE_X0 = 2,
+ CNTTZ_UNARY_OPCODE_Y0 = 2,
+ CRC32_32_RRR_0_OPCODE_X0 = 20,
+ CRC32_8_RRR_0_OPCODE_X0 = 21,
+ DBLALIGN2_RRR_0_OPCODE_X0 = 22,
+ DBLALIGN2_RRR_0_OPCODE_X1 = 13,
+ DBLALIGN4_RRR_0_OPCODE_X0 = 23,
+ DBLALIGN4_RRR_0_OPCODE_X1 = 14,
+ DBLALIGN6_RRR_0_OPCODE_X0 = 24,
+ DBLALIGN6_RRR_0_OPCODE_X1 = 15,
+ DBLALIGN_RRR_0_OPCODE_X0 = 25,
+ DRAIN_UNARY_OPCODE_X1 = 1,
+ DTLBPR_UNARY_OPCODE_X1 = 2,
+ EXCH4_RRR_0_OPCODE_X1 = 16,
+ EXCH_RRR_0_OPCODE_X1 = 17,
+ FDOUBLE_ADDSUB_RRR_0_OPCODE_X0 = 26,
+ FDOUBLE_ADD_FLAGS_RRR_0_OPCODE_X0 = 27,
+ FDOUBLE_MUL_FLAGS_RRR_0_OPCODE_X0 = 28,
+ FDOUBLE_PACK1_RRR_0_OPCODE_X0 = 29,
+ FDOUBLE_PACK2_RRR_0_OPCODE_X0 = 30,
+ FDOUBLE_SUB_FLAGS_RRR_0_OPCODE_X0 = 31,
+ FDOUBLE_UNPACK_MAX_RRR_0_OPCODE_X0 = 32,
+ FDOUBLE_UNPACK_MIN_RRR_0_OPCODE_X0 = 33,
+ FETCHADD4_RRR_0_OPCODE_X1 = 18,
+ FETCHADDGEZ4_RRR_0_OPCODE_X1 = 19,
+ FETCHADDGEZ_RRR_0_OPCODE_X1 = 20,
+ FETCHADD_RRR_0_OPCODE_X1 = 21,
+ FETCHAND4_RRR_0_OPCODE_X1 = 22,
+ FETCHAND_RRR_0_OPCODE_X1 = 23,
+ FETCHOR4_RRR_0_OPCODE_X1 = 24,
+ FETCHOR_RRR_0_OPCODE_X1 = 25,
+ FINV_UNARY_OPCODE_X1 = 3,
+ FLUSHWB_UNARY_OPCODE_X1 = 4,
+ FLUSH_UNARY_OPCODE_X1 = 5,
+ FNOP_UNARY_OPCODE_X0 = 3,
+ FNOP_UNARY_OPCODE_X1 = 6,
+ FNOP_UNARY_OPCODE_Y0 = 3,
+ FNOP_UNARY_OPCODE_Y1 = 8,
+ FSINGLE_ADD1_RRR_0_OPCODE_X0 = 34,
+ FSINGLE_ADDSUB2_RRR_0_OPCODE_X0 = 35,
+ FSINGLE_MUL1_RRR_0_OPCODE_X0 = 36,
+ FSINGLE_MUL2_RRR_0_OPCODE_X0 = 37,
+ FSINGLE_PACK1_UNARY_OPCODE_X0 = 4,
+ FSINGLE_PACK1_UNARY_OPCODE_Y0 = 4,
+ FSINGLE_PACK2_RRR_0_OPCODE_X0 = 38,
+ FSINGLE_SUB1_RRR_0_OPCODE_X0 = 39,
+ ICOH_UNARY_OPCODE_X1 = 7,
+ ILL_UNARY_OPCODE_X1 = 8,
+ ILL_UNARY_OPCODE_Y1 = 9,
+ IMM8_OPCODE_X0 = 4,
+ IMM8_OPCODE_X1 = 3,
+ INV_UNARY_OPCODE_X1 = 9,
+ IRET_UNARY_OPCODE_X1 = 10,
+ JALRP_UNARY_OPCODE_X1 = 11,
+ JALRP_UNARY_OPCODE_Y1 = 10,
+ JALR_UNARY_OPCODE_X1 = 12,
+ JALR_UNARY_OPCODE_Y1 = 11,
+ JAL_JUMP_OPCODE_X1 = 0,
+ JRP_UNARY_OPCODE_X1 = 13,
+ JRP_UNARY_OPCODE_Y1 = 12,
+ JR_UNARY_OPCODE_X1 = 14,
+ JR_UNARY_OPCODE_Y1 = 13,
+ JUMP_OPCODE_X1 = 4,
+ J_JUMP_OPCODE_X1 = 1,
+ LD1S_ADD_IMM8_OPCODE_X1 = 7,
+ LD1S_OPCODE_Y2 = 0,
+ LD1S_UNARY_OPCODE_X1 = 15,
+ LD1U_ADD_IMM8_OPCODE_X1 = 8,
+ LD1U_OPCODE_Y2 = 1,
+ LD1U_UNARY_OPCODE_X1 = 16,
+ LD2S_ADD_IMM8_OPCODE_X1 = 9,
+ LD2S_OPCODE_Y2 = 2,
+ LD2S_UNARY_OPCODE_X1 = 17,
+ LD2U_ADD_IMM8_OPCODE_X1 = 10,
+ LD2U_OPCODE_Y2 = 3,
+ LD2U_UNARY_OPCODE_X1 = 18,
+ LD4S_ADD_IMM8_OPCODE_X1 = 11,
+ LD4S_OPCODE_Y2 = 1,
+ LD4S_UNARY_OPCODE_X1 = 19,
+ LD4U_ADD_IMM8_OPCODE_X1 = 12,
+ LD4U_OPCODE_Y2 = 2,
+ LD4U_UNARY_OPCODE_X1 = 20,
+ LDNA_UNARY_OPCODE_X1 = 21,
+ LDNT1S_ADD_IMM8_OPCODE_X1 = 13,
+ LDNT1S_UNARY_OPCODE_X1 = 22,
+ LDNT1U_ADD_IMM8_OPCODE_X1 = 14,
+ LDNT1U_UNARY_OPCODE_X1 = 23,
+ LDNT2S_ADD_IMM8_OPCODE_X1 = 15,
+ LDNT2S_UNARY_OPCODE_X1 = 24,
+ LDNT2U_ADD_IMM8_OPCODE_X1 = 16,
+ LDNT2U_UNARY_OPCODE_X1 = 25,
+ LDNT4S_ADD_IMM8_OPCODE_X1 = 17,
+ LDNT4S_UNARY_OPCODE_X1 = 26,
+ LDNT4U_ADD_IMM8_OPCODE_X1 = 18,
+ LDNT4U_UNARY_OPCODE_X1 = 27,
+ LDNT_ADD_IMM8_OPCODE_X1 = 19,
+ LDNT_UNARY_OPCODE_X1 = 28,
+ LD_ADD_IMM8_OPCODE_X1 = 20,
+ LD_OPCODE_Y2 = 3,
+ LD_UNARY_OPCODE_X1 = 29,
+ LNK_UNARY_OPCODE_X1 = 30,
+ LNK_UNARY_OPCODE_Y1 = 14,
+ LWNA_ADD_IMM8_OPCODE_X1 = 21,
+ MFSPR_IMM8_OPCODE_X1 = 22,
+ MF_UNARY_OPCODE_X1 = 31,
+ MM_BF_OPCODE_X0 = 7,
+ MNZ_RRR_0_OPCODE_X0 = 40,
+ MNZ_RRR_0_OPCODE_X1 = 26,
+ MNZ_RRR_4_OPCODE_Y0 = 2,
+ MNZ_RRR_4_OPCODE_Y1 = 2,
+ MODE_OPCODE_YA2 = 1,
+ MODE_OPCODE_YB2 = 2,
+ MODE_OPCODE_YC2 = 3,
+ MTSPR_IMM8_OPCODE_X1 = 23,
+ MULAX_RRR_0_OPCODE_X0 = 41,
+ MULAX_RRR_3_OPCODE_Y0 = 2,
+ MULA_HS_HS_RRR_0_OPCODE_X0 = 42,
+ MULA_HS_HS_RRR_9_OPCODE_Y0 = 0,
+ MULA_HS_HU_RRR_0_OPCODE_X0 = 43,
+ MULA_HS_LS_RRR_0_OPCODE_X0 = 44,
+ MULA_HS_LU_RRR_0_OPCODE_X0 = 45,
+ MULA_HU_HU_RRR_0_OPCODE_X0 = 46,
+ MULA_HU_HU_RRR_9_OPCODE_Y0 = 1,
+ MULA_HU_LS_RRR_0_OPCODE_X0 = 47,
+ MULA_HU_LU_RRR_0_OPCODE_X0 = 48,
+ MULA_LS_LS_RRR_0_OPCODE_X0 = 49,
+ MULA_LS_LS_RRR_9_OPCODE_Y0 = 2,
+ MULA_LS_LU_RRR_0_OPCODE_X0 = 50,
+ MULA_LU_LU_RRR_0_OPCODE_X0 = 51,
+ MULA_LU_LU_RRR_9_OPCODE_Y0 = 3,
+ MULX_RRR_0_OPCODE_X0 = 52,
+ MULX_RRR_3_OPCODE_Y0 = 3,
+ MUL_HS_HS_RRR_0_OPCODE_X0 = 53,
+ MUL_HS_HS_RRR_8_OPCODE_Y0 = 0,
+ MUL_HS_HU_RRR_0_OPCODE_X0 = 54,
+ MUL_HS_LS_RRR_0_OPCODE_X0 = 55,
+ MUL_HS_LU_RRR_0_OPCODE_X0 = 56,
+ MUL_HU_HU_RRR_0_OPCODE_X0 = 57,
+ MUL_HU_HU_RRR_8_OPCODE_Y0 = 1,
+ MUL_HU_LS_RRR_0_OPCODE_X0 = 58,
+ MUL_HU_LU_RRR_0_OPCODE_X0 = 59,
+ MUL_LS_LS_RRR_0_OPCODE_X0 = 60,
+ MUL_LS_LS_RRR_8_OPCODE_Y0 = 2,
+ MUL_LS_LU_RRR_0_OPCODE_X0 = 61,
+ MUL_LU_LU_RRR_0_OPCODE_X0 = 62,
+ MUL_LU_LU_RRR_8_OPCODE_Y0 = 3,
+ MZ_RRR_0_OPCODE_X0 = 63,
+ MZ_RRR_0_OPCODE_X1 = 27,
+ MZ_RRR_4_OPCODE_Y0 = 3,
+ MZ_RRR_4_OPCODE_Y1 = 3,
+ NAP_UNARY_OPCODE_X1 = 32,
+ NOP_UNARY_OPCODE_X0 = 5,
+ NOP_UNARY_OPCODE_X1 = 33,
+ NOP_UNARY_OPCODE_Y0 = 5,
+ NOP_UNARY_OPCODE_Y1 = 15,
+ NOR_RRR_0_OPCODE_X0 = 64,
+ NOR_RRR_0_OPCODE_X1 = 28,
+ NOR_RRR_5_OPCODE_Y0 = 1,
+ NOR_RRR_5_OPCODE_Y1 = 1,
+ ORI_IMM8_OPCODE_X0 = 7,
+ ORI_IMM8_OPCODE_X1 = 24,
+ OR_RRR_0_OPCODE_X0 = 65,
+ OR_RRR_0_OPCODE_X1 = 29,
+ OR_RRR_5_OPCODE_Y0 = 2,
+ OR_RRR_5_OPCODE_Y1 = 2,
+ PCNT_UNARY_OPCODE_X0 = 6,
+ PCNT_UNARY_OPCODE_Y0 = 6,
+ REVBITS_UNARY_OPCODE_X0 = 7,
+ REVBITS_UNARY_OPCODE_Y0 = 7,
+ REVBYTES_UNARY_OPCODE_X0 = 8,
+ REVBYTES_UNARY_OPCODE_Y0 = 8,
+ ROTLI_SHIFT_OPCODE_X0 = 1,
+ ROTLI_SHIFT_OPCODE_X1 = 1,
+ ROTLI_SHIFT_OPCODE_Y0 = 0,
+ ROTLI_SHIFT_OPCODE_Y1 = 0,
+ ROTL_RRR_0_OPCODE_X0 = 66,
+ ROTL_RRR_0_OPCODE_X1 = 30,
+ ROTL_RRR_6_OPCODE_Y0 = 0,
+ ROTL_RRR_6_OPCODE_Y1 = 0,
+ RRR_0_OPCODE_X0 = 5,
+ RRR_0_OPCODE_X1 = 5,
+ RRR_0_OPCODE_Y0 = 5,
+ RRR_0_OPCODE_Y1 = 6,
+ RRR_1_OPCODE_Y0 = 6,
+ RRR_1_OPCODE_Y1 = 7,
+ RRR_2_OPCODE_Y0 = 7,
+ RRR_2_OPCODE_Y1 = 8,
+ RRR_3_OPCODE_Y0 = 8,
+ RRR_3_OPCODE_Y1 = 9,
+ RRR_4_OPCODE_Y0 = 9,
+ RRR_4_OPCODE_Y1 = 10,
+ RRR_5_OPCODE_Y0 = 10,
+ RRR_5_OPCODE_Y1 = 11,
+ RRR_6_OPCODE_Y0 = 11,
+ RRR_6_OPCODE_Y1 = 12,
+ RRR_7_OPCODE_Y0 = 12,
+ RRR_7_OPCODE_Y1 = 13,
+ RRR_8_OPCODE_Y0 = 13,
+ RRR_9_OPCODE_Y0 = 14,
+ SHIFT_OPCODE_X0 = 6,
+ SHIFT_OPCODE_X1 = 6,
+ SHIFT_OPCODE_Y0 = 15,
+ SHIFT_OPCODE_Y1 = 14,
+ SHL16INSLI_OPCODE_X0 = 7,
+ SHL16INSLI_OPCODE_X1 = 7,
+ SHL1ADDX_RRR_0_OPCODE_X0 = 67,
+ SHL1ADDX_RRR_0_OPCODE_X1 = 31,
+ SHL1ADDX_RRR_7_OPCODE_Y0 = 1,
+ SHL1ADDX_RRR_7_OPCODE_Y1 = 1,
+ SHL1ADD_RRR_0_OPCODE_X0 = 68,
+ SHL1ADD_RRR_0_OPCODE_X1 = 32,
+ SHL1ADD_RRR_1_OPCODE_Y0 = 0,
+ SHL1ADD_RRR_1_OPCODE_Y1 = 0,
+ SHL2ADDX_RRR_0_OPCODE_X0 = 69,
+ SHL2ADDX_RRR_0_OPCODE_X1 = 33,
+ SHL2ADDX_RRR_7_OPCODE_Y0 = 2,
+ SHL2ADDX_RRR_7_OPCODE_Y1 = 2,
+ SHL2ADD_RRR_0_OPCODE_X0 = 70,
+ SHL2ADD_RRR_0_OPCODE_X1 = 34,
+ SHL2ADD_RRR_1_OPCODE_Y0 = 1,
+ SHL2ADD_RRR_1_OPCODE_Y1 = 1,
+ SHL3ADDX_RRR_0_OPCODE_X0 = 71,
+ SHL3ADDX_RRR_0_OPCODE_X1 = 35,
+ SHL3ADDX_RRR_7_OPCODE_Y0 = 3,
+ SHL3ADDX_RRR_7_OPCODE_Y1 = 3,
+ SHL3ADD_RRR_0_OPCODE_X0 = 72,
+ SHL3ADD_RRR_0_OPCODE_X1 = 36,
+ SHL3ADD_RRR_1_OPCODE_Y0 = 2,
+ SHL3ADD_RRR_1_OPCODE_Y1 = 2,
+ SHLI_SHIFT_OPCODE_X0 = 2,
+ SHLI_SHIFT_OPCODE_X1 = 2,
+ SHLI_SHIFT_OPCODE_Y0 = 1,
+ SHLI_SHIFT_OPCODE_Y1 = 1,
+ SHLXI_SHIFT_OPCODE_X0 = 3,
+ SHLXI_SHIFT_OPCODE_X1 = 3,
+ SHLX_RRR_0_OPCODE_X0 = 73,
+ SHLX_RRR_0_OPCODE_X1 = 37,
+ SHL_RRR_0_OPCODE_X0 = 74,
+ SHL_RRR_0_OPCODE_X1 = 38,
+ SHL_RRR_6_OPCODE_Y0 = 1,
+ SHL_RRR_6_OPCODE_Y1 = 1,
+ SHRSI_SHIFT_OPCODE_X0 = 4,
+ SHRSI_SHIFT_OPCODE_X1 = 4,
+ SHRSI_SHIFT_OPCODE_Y0 = 2,
+ SHRSI_SHIFT_OPCODE_Y1 = 2,
+ SHRS_RRR_0_OPCODE_X0 = 75,
+ SHRS_RRR_0_OPCODE_X1 = 39,
+ SHRS_RRR_6_OPCODE_Y0 = 2,
+ SHRS_RRR_6_OPCODE_Y1 = 2,
+ SHRUI_SHIFT_OPCODE_X0 = 5,
+ SHRUI_SHIFT_OPCODE_X1 = 5,
+ SHRUI_SHIFT_OPCODE_Y0 = 3,
+ SHRUI_SHIFT_OPCODE_Y1 = 3,
+ SHRUXI_SHIFT_OPCODE_X0 = 6,
+ SHRUXI_SHIFT_OPCODE_X1 = 6,
+ SHRUX_RRR_0_OPCODE_X0 = 76,
+ SHRUX_RRR_0_OPCODE_X1 = 40,
+ SHRU_RRR_0_OPCODE_X0 = 77,
+ SHRU_RRR_0_OPCODE_X1 = 41,
+ SHRU_RRR_6_OPCODE_Y0 = 3,
+ SHRU_RRR_6_OPCODE_Y1 = 3,
+ SHUFFLEBYTES_RRR_0_OPCODE_X0 = 78,
+ ST1_ADD_IMM8_OPCODE_X1 = 25,
+ ST1_OPCODE_Y2 = 0,
+ ST1_RRR_0_OPCODE_X1 = 42,
+ ST2_ADD_IMM8_OPCODE_X1 = 26,
+ ST2_OPCODE_Y2 = 1,
+ ST2_RRR_0_OPCODE_X1 = 43,
+ ST4_ADD_IMM8_OPCODE_X1 = 27,
+ ST4_OPCODE_Y2 = 2,
+ ST4_RRR_0_OPCODE_X1 = 44,
+ STNT1_ADD_IMM8_OPCODE_X1 = 28,
+ STNT1_RRR_0_OPCODE_X1 = 45,
+ STNT2_ADD_IMM8_OPCODE_X1 = 29,
+ STNT2_RRR_0_OPCODE_X1 = 46,
+ STNT4_ADD_IMM8_OPCODE_X1 = 30,
+ STNT4_RRR_0_OPCODE_X1 = 47,
+ STNT_ADD_IMM8_OPCODE_X1 = 31,
+ STNT_RRR_0_OPCODE_X1 = 48,
+ ST_ADD_IMM8_OPCODE_X1 = 32,
+ ST_OPCODE_Y2 = 3,
+ ST_RRR_0_OPCODE_X1 = 49,
+ SUBXSC_RRR_0_OPCODE_X0 = 79,
+ SUBXSC_RRR_0_OPCODE_X1 = 50,
+ SUBX_RRR_0_OPCODE_X0 = 80,
+ SUBX_RRR_0_OPCODE_X1 = 51,
+ SUBX_RRR_0_OPCODE_Y0 = 2,
+ SUBX_RRR_0_OPCODE_Y1 = 2,
+ SUB_RRR_0_OPCODE_X0 = 81,
+ SUB_RRR_0_OPCODE_X1 = 52,
+ SUB_RRR_0_OPCODE_Y0 = 3,
+ SUB_RRR_0_OPCODE_Y1 = 3,
+ SWINT0_UNARY_OPCODE_X1 = 34,
+ SWINT1_UNARY_OPCODE_X1 = 35,
+ SWINT2_UNARY_OPCODE_X1 = 36,
+ SWINT3_UNARY_OPCODE_X1 = 37,
+ TBLIDXB0_UNARY_OPCODE_X0 = 9,
+ TBLIDXB0_UNARY_OPCODE_Y0 = 9,
+ TBLIDXB1_UNARY_OPCODE_X0 = 10,
+ TBLIDXB1_UNARY_OPCODE_Y0 = 10,
+ TBLIDXB2_UNARY_OPCODE_X0 = 11,
+ TBLIDXB2_UNARY_OPCODE_Y0 = 11,
+ TBLIDXB3_UNARY_OPCODE_X0 = 12,
+ TBLIDXB3_UNARY_OPCODE_Y0 = 12,
+ UNARY_RRR_0_OPCODE_X0 = 82,
+ UNARY_RRR_0_OPCODE_X1 = 53,
+ UNARY_RRR_1_OPCODE_Y0 = 3,
+ UNARY_RRR_1_OPCODE_Y1 = 3,
+ V1ADDI_IMM8_OPCODE_X0 = 8,
+ V1ADDI_IMM8_OPCODE_X1 = 33,
+ V1ADDUC_RRR_0_OPCODE_X0 = 83,
+ V1ADDUC_RRR_0_OPCODE_X1 = 54,
+ V1ADD_RRR_0_OPCODE_X0 = 84,
+ V1ADD_RRR_0_OPCODE_X1 = 55,
+ V1ADIFFU_RRR_0_OPCODE_X0 = 85,
+ V1AVGU_RRR_0_OPCODE_X0 = 86,
+ V1CMPEQI_IMM8_OPCODE_X0 = 9,
+ V1CMPEQI_IMM8_OPCODE_X1 = 34,
+ V1CMPEQ_RRR_0_OPCODE_X0 = 87,
+ V1CMPEQ_RRR_0_OPCODE_X1 = 56,
+ V1CMPLES_RRR_0_OPCODE_X0 = 88,
+ V1CMPLES_RRR_0_OPCODE_X1 = 57,
+ V1CMPLEU_RRR_0_OPCODE_X0 = 89,
+ V1CMPLEU_RRR_0_OPCODE_X1 = 58,
+ V1CMPLTSI_IMM8_OPCODE_X0 = 10,
+ V1CMPLTSI_IMM8_OPCODE_X1 = 35,
+ V1CMPLTS_RRR_0_OPCODE_X0 = 90,
+ V1CMPLTS_RRR_0_OPCODE_X1 = 59,
+ V1CMPLTUI_IMM8_OPCODE_X0 = 11,
+ V1CMPLTUI_IMM8_OPCODE_X1 = 36,
+ V1CMPLTU_RRR_0_OPCODE_X0 = 91,
+ V1CMPLTU_RRR_0_OPCODE_X1 = 60,
+ V1CMPNE_RRR_0_OPCODE_X0 = 92,
+ V1CMPNE_RRR_0_OPCODE_X1 = 61,
+ V1DDOTPUA_RRR_0_OPCODE_X0 = 161,
+ V1DDOTPUSA_RRR_0_OPCODE_X0 = 93,
+ V1DDOTPUS_RRR_0_OPCODE_X0 = 94,
+ V1DDOTPU_RRR_0_OPCODE_X0 = 162,
+ V1DOTPA_RRR_0_OPCODE_X0 = 95,
+ V1DOTPUA_RRR_0_OPCODE_X0 = 163,
+ V1DOTPUSA_RRR_0_OPCODE_X0 = 96,
+ V1DOTPUS_RRR_0_OPCODE_X0 = 97,
+ V1DOTPU_RRR_0_OPCODE_X0 = 164,
+ V1DOTP_RRR_0_OPCODE_X0 = 98,
+ V1INT_H_RRR_0_OPCODE_X0 = 99,
+ V1INT_H_RRR_0_OPCODE_X1 = 62,
+ V1INT_L_RRR_0_OPCODE_X0 = 100,
+ V1INT_L_RRR_0_OPCODE_X1 = 63,
+ V1MAXUI_IMM8_OPCODE_X0 = 12,
+ V1MAXUI_IMM8_OPCODE_X1 = 37,
+ V1MAXU_RRR_0_OPCODE_X0 = 101,
+ V1MAXU_RRR_0_OPCODE_X1 = 64,
+ V1MINUI_IMM8_OPCODE_X0 = 13,
+ V1MINUI_IMM8_OPCODE_X1 = 38,
+ V1MINU_RRR_0_OPCODE_X0 = 102,
+ V1MINU_RRR_0_OPCODE_X1 = 65,
+ V1MNZ_RRR_0_OPCODE_X0 = 103,
+ V1MNZ_RRR_0_OPCODE_X1 = 66,
+ V1MULTU_RRR_0_OPCODE_X0 = 104,
+ V1MULUS_RRR_0_OPCODE_X0 = 105,
+ V1MULU_RRR_0_OPCODE_X0 = 106,
+ V1MZ_RRR_0_OPCODE_X0 = 107,
+ V1MZ_RRR_0_OPCODE_X1 = 67,
+ V1SADAU_RRR_0_OPCODE_X0 = 108,
+ V1SADU_RRR_0_OPCODE_X0 = 109,
+ V1SHLI_SHIFT_OPCODE_X0 = 7,
+ V1SHLI_SHIFT_OPCODE_X1 = 7,
+ V1SHL_RRR_0_OPCODE_X0 = 110,
+ V1SHL_RRR_0_OPCODE_X1 = 68,
+ V1SHRSI_SHIFT_OPCODE_X0 = 8,
+ V1SHRSI_SHIFT_OPCODE_X1 = 8,
+ V1SHRS_RRR_0_OPCODE_X0 = 111,
+ V1SHRS_RRR_0_OPCODE_X1 = 69,
+ V1SHRUI_SHIFT_OPCODE_X0 = 9,
+ V1SHRUI_SHIFT_OPCODE_X1 = 9,
+ V1SHRU_RRR_0_OPCODE_X0 = 112,
+ V1SHRU_RRR_0_OPCODE_X1 = 70,
+ V1SUBUC_RRR_0_OPCODE_X0 = 113,
+ V1SUBUC_RRR_0_OPCODE_X1 = 71,
+ V1SUB_RRR_0_OPCODE_X0 = 114,
+ V1SUB_RRR_0_OPCODE_X1 = 72,
+ V2ADDI_IMM8_OPCODE_X0 = 14,
+ V2ADDI_IMM8_OPCODE_X1 = 39,
+ V2ADDSC_RRR_0_OPCODE_X0 = 115,
+ V2ADDSC_RRR_0_OPCODE_X1 = 73,
+ V2ADD_RRR_0_OPCODE_X0 = 116,
+ V2ADD_RRR_0_OPCODE_X1 = 74,
+ V2ADIFFS_RRR_0_OPCODE_X0 = 117,
+ V2AVGS_RRR_0_OPCODE_X0 = 118,
+ V2CMPEQI_IMM8_OPCODE_X0 = 15,
+ V2CMPEQI_IMM8_OPCODE_X1 = 40,
+ V2CMPEQ_RRR_0_OPCODE_X0 = 119,
+ V2CMPEQ_RRR_0_OPCODE_X1 = 75,
+ V2CMPLES_RRR_0_OPCODE_X0 = 120,
+ V2CMPLES_RRR_0_OPCODE_X1 = 76,
+ V2CMPLEU_RRR_0_OPCODE_X0 = 121,
+ V2CMPLEU_RRR_0_OPCODE_X1 = 77,
+ V2CMPLTSI_IMM8_OPCODE_X0 = 16,
+ V2CMPLTSI_IMM8_OPCODE_X1 = 41,
+ V2CMPLTS_RRR_0_OPCODE_X0 = 122,
+ V2CMPLTS_RRR_0_OPCODE_X1 = 78,
+ V2CMPLTUI_IMM8_OPCODE_X0 = 17,
+ V2CMPLTUI_IMM8_OPCODE_X1 = 42,
+ V2CMPLTU_RRR_0_OPCODE_X0 = 123,
+ V2CMPLTU_RRR_0_OPCODE_X1 = 79,
+ V2CMPNE_RRR_0_OPCODE_X0 = 124,
+ V2CMPNE_RRR_0_OPCODE_X1 = 80,
+ V2DOTPA_RRR_0_OPCODE_X0 = 125,
+ V2DOTP_RRR_0_OPCODE_X0 = 126,
+ V2INT_H_RRR_0_OPCODE_X0 = 127,
+ V2INT_H_RRR_0_OPCODE_X1 = 81,
+ V2INT_L_RRR_0_OPCODE_X0 = 128,
+ V2INT_L_RRR_0_OPCODE_X1 = 82,
+ V2MAXSI_IMM8_OPCODE_X0 = 18,
+ V2MAXSI_IMM8_OPCODE_X1 = 43,
+ V2MAXS_RRR_0_OPCODE_X0 = 129,
+ V2MAXS_RRR_0_OPCODE_X1 = 83,
+ V2MINSI_IMM8_OPCODE_X0 = 19,
+ V2MINSI_IMM8_OPCODE_X1 = 44,
+ V2MINS_RRR_0_OPCODE_X0 = 130,
+ V2MINS_RRR_0_OPCODE_X1 = 84,
+ V2MNZ_RRR_0_OPCODE_X0 = 131,
+ V2MNZ_RRR_0_OPCODE_X1 = 85,
+ V2MULFSC_RRR_0_OPCODE_X0 = 132,
+ V2MULS_RRR_0_OPCODE_X0 = 133,
+ V2MULTS_RRR_0_OPCODE_X0 = 134,
+ V2MZ_RRR_0_OPCODE_X0 = 135,
+ V2MZ_RRR_0_OPCODE_X1 = 86,
+ V2PACKH_RRR_0_OPCODE_X0 = 136,
+ V2PACKH_RRR_0_OPCODE_X1 = 87,
+ V2PACKL_RRR_0_OPCODE_X0 = 137,
+ V2PACKL_RRR_0_OPCODE_X1 = 88,
+ V2PACKUC_RRR_0_OPCODE_X0 = 138,
+ V2PACKUC_RRR_0_OPCODE_X1 = 89,
+ V2SADAS_RRR_0_OPCODE_X0 = 139,
+ V2SADAU_RRR_0_OPCODE_X0 = 140,
+ V2SADS_RRR_0_OPCODE_X0 = 141,
+ V2SADU_RRR_0_OPCODE_X0 = 142,
+ V2SHLI_SHIFT_OPCODE_X0 = 10,
+ V2SHLI_SHIFT_OPCODE_X1 = 10,
+ V2SHLSC_RRR_0_OPCODE_X0 = 143,
+ V2SHLSC_RRR_0_OPCODE_X1 = 90,
+ V2SHL_RRR_0_OPCODE_X0 = 144,
+ V2SHL_RRR_0_OPCODE_X1 = 91,
+ V2SHRSI_SHIFT_OPCODE_X0 = 11,
+ V2SHRSI_SHIFT_OPCODE_X1 = 11,
+ V2SHRS_RRR_0_OPCODE_X0 = 145,
+ V2SHRS_RRR_0_OPCODE_X1 = 92,
+ V2SHRUI_SHIFT_OPCODE_X0 = 12,
+ V2SHRUI_SHIFT_OPCODE_X1 = 12,
+ V2SHRU_RRR_0_OPCODE_X0 = 146,
+ V2SHRU_RRR_0_OPCODE_X1 = 93,
+ V2SUBSC_RRR_0_OPCODE_X0 = 147,
+ V2SUBSC_RRR_0_OPCODE_X1 = 94,
+ V2SUB_RRR_0_OPCODE_X0 = 148,
+ V2SUB_RRR_0_OPCODE_X1 = 95,
+ V4ADDSC_RRR_0_OPCODE_X0 = 149,
+ V4ADDSC_RRR_0_OPCODE_X1 = 96,
+ V4ADD_RRR_0_OPCODE_X0 = 150,
+ V4ADD_RRR_0_OPCODE_X1 = 97,
+ V4INT_H_RRR_0_OPCODE_X0 = 151,
+ V4INT_H_RRR_0_OPCODE_X1 = 98,
+ V4INT_L_RRR_0_OPCODE_X0 = 152,
+ V4INT_L_RRR_0_OPCODE_X1 = 99,
+ V4PACKSC_RRR_0_OPCODE_X0 = 153,
+ V4PACKSC_RRR_0_OPCODE_X1 = 100,
+ V4SHLSC_RRR_0_OPCODE_X0 = 154,
+ V4SHLSC_RRR_0_OPCODE_X1 = 101,
+ V4SHL_RRR_0_OPCODE_X0 = 155,
+ V4SHL_RRR_0_OPCODE_X1 = 102,
+ V4SHRS_RRR_0_OPCODE_X0 = 156,
+ V4SHRS_RRR_0_OPCODE_X1 = 103,
+ V4SHRU_RRR_0_OPCODE_X0 = 157,
+ V4SHRU_RRR_0_OPCODE_X1 = 104,
+ V4SUBSC_RRR_0_OPCODE_X0 = 158,
+ V4SUBSC_RRR_0_OPCODE_X1 = 105,
+ V4SUB_RRR_0_OPCODE_X0 = 159,
+ V4SUB_RRR_0_OPCODE_X1 = 106,
+ WH64_UNARY_OPCODE_X1 = 38,
+ XORI_IMM8_OPCODE_X0 = 20,
+ XORI_IMM8_OPCODE_X1 = 45,
+ XOR_RRR_0_OPCODE_X0 = 160,
+ XOR_RRR_0_OPCODE_X1 = 107,
+ XOR_RRR_5_OPCODE_Y0 = 3,
+ XOR_RRR_5_OPCODE_Y1 = 3
+};
+
+
+#endif /* __ASSEMBLER__ */
+
+#endif /* __ARCH_OPCODE_H__ */
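For context, the translator (patch 09 of this series) drives its decoder off get_Mode() and the per-pipe opcode fields; a minimal dispatch skeleton under that assumption (the decode_* helpers are hypothetical):

    /* Illustrative only: get_Mode() != 0 selects the Y encoding
       (MODE_OPCODE_YA2/YB2/YC2); mode 0 is the X encoding. */
    static void decode_bundle(tilegx_bundle_bits bundle)
    {
        if (get_Mode(bundle)) {
            decode_y0(bundle);
            decode_y1(bundle);
            decode_y2(bundle);
        } else {
            decode_x0(bundle);
            decode_x1(bundle);
        }
    }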
--
1.9.3
* [Qemu-devel] [PATCH 05/10 v12] target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (3 preceding siblings ...)
2015-06-13 13:14 ` [Qemu-devel] [PATCH 04/10 v12] target-tilegx: Add opcode basic implementation from Tilera Corporation Chen Gang
@ 2015-06-13 13:15 ` Chen Gang
2015-06-13 13:18 ` [Qemu-devel] [PATCH 07/10 v12] target-tilegx: Add cpu basic features for linux-user Chen Gang
` (5 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:15 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
Use 'inline' instead of '__inline', and 'uint64_t' instead of 'unsigned long long'.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/opcode_tilegx.h | 220 +++++++++++++++++++++---------------------
1 file changed, 110 insertions(+), 110 deletions(-)
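The type change is cosmetic on common hosts, where both spellings are 64 bits wide; what matters for correctness is the 64-bit width itself, which keeps the create_* shifts well-defined. A standalone illustration (not part of the patch):

    #include <stdint.h>

    typedef uint64_t tilegx_bundle_bits;

    /* The cast up to the 64-bit bundle type is what makes shifts past
       bit 31 well-defined; shifting a plain unsigned int by 43 bits
       would be undefined behaviour. */
    static tilegx_bundle_bits demo(unsigned int n)
    {
        return ((tilegx_bundle_bits)(n & 0x3f)) << 43;
    }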
diff --git a/target-tilegx/opcode_tilegx.h b/target-tilegx/opcode_tilegx.h
index d76ff2d..33b71a9 100644
--- a/target-tilegx/opcode_tilegx.h
+++ b/target-tilegx/opcode_tilegx.h
@@ -23,7 +23,7 @@
#ifndef __ASSEMBLER__
-typedef unsigned long long tilegx_bundle_bits;
+typedef uint64_t tilegx_bundle_bits;
/* These are the bits that determine if a bundle is in the X encoding. */
#define TILEGX_BUNDLE_MODE_MASK ((tilegx_bundle_bits)3 << 62)
@@ -66,360 +66,360 @@ typedef tilegx_bundle_bits tile_bundle_bits;
/* 64-bit pattern for a { bpt ; nop } bundle. */
#define TILEGX_BPT_BUNDLE 0x286a44ae51485000ULL
-static __inline unsigned int
+static inline unsigned int
get_BFEnd_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_BFOpcodeExtension_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 24)) & 0xf);
}
-static __inline unsigned int
+static inline unsigned int
get_BFStart_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 18)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_BrOff_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 31)) & 0x0000003f) |
(((unsigned int)(n >> 37)) & 0x0001ffc0);
}
-static __inline unsigned int
+static inline unsigned int
get_BrType_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 54)) & 0x1f);
}
-static __inline unsigned int
+static inline unsigned int
get_Dest_Imm8_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 31)) & 0x0000003f) |
(((unsigned int)(n >> 43)) & 0x000000c0);
}
-static __inline unsigned int
+static inline unsigned int
get_Dest_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 0)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_Dest_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 31)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_Dest_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 0)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_Dest_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 31)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm16_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0xffff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm16_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0xffff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm8OpcodeExtension_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 20)) & 0xff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm8OpcodeExtension_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 51)) & 0xff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm8_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0xff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm8_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0xff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm8_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0xff);
}
-static __inline unsigned int
+static inline unsigned int
get_Imm8_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0xff);
}
-static __inline unsigned int
+static inline unsigned int
get_JumpOff_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 31)) & 0x7ffffff);
}
-static __inline unsigned int
+static inline unsigned int
get_JumpOpcodeExtension_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 58)) & 0x1);
}
-static __inline unsigned int
+static inline unsigned int
get_MF_Imm14_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 37)) & 0x3fff);
}
-static __inline unsigned int
+static inline unsigned int
get_MT_Imm14_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 31)) & 0x0000003f) |
(((unsigned int)(n >> 37)) & 0x00003fc0);
}
-static __inline unsigned int
+static inline unsigned int
get_Mode(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 62)) & 0x3);
}
-static __inline unsigned int
+static inline unsigned int
get_Opcode_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 28)) & 0x7);
}
-static __inline unsigned int
+static inline unsigned int
get_Opcode_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 59)) & 0x7);
}
-static __inline unsigned int
+static inline unsigned int
get_Opcode_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 27)) & 0xf);
}
-static __inline unsigned int
+static inline unsigned int
get_Opcode_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 58)) & 0xf);
}
-static __inline unsigned int
+static inline unsigned int
get_Opcode_Y2(tilegx_bundle_bits n)
{
return (((n >> 26)) & 0x00000001) |
(((unsigned int)(n >> 56)) & 0x00000002);
}
-static __inline unsigned int
+static inline unsigned int
get_RRROpcodeExtension_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 18)) & 0x3ff);
}
-static __inline unsigned int
+static inline unsigned int
get_RRROpcodeExtension_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 49)) & 0x3ff);
}
-static __inline unsigned int
+static inline unsigned int
get_RRROpcodeExtension_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 18)) & 0x3);
}
-static __inline unsigned int
+static inline unsigned int
get_RRROpcodeExtension_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 49)) & 0x3);
}
-static __inline unsigned int
+static inline unsigned int
get_ShAmt_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_ShAmt_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_ShAmt_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_ShAmt_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_ShiftOpcodeExtension_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 18)) & 0x3ff);
}
-static __inline unsigned int
+static inline unsigned int
get_ShiftOpcodeExtension_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 49)) & 0x3ff);
}
-static __inline unsigned int
+static inline unsigned int
get_ShiftOpcodeExtension_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 18)) & 0x3);
}
-static __inline unsigned int
+static inline unsigned int
get_ShiftOpcodeExtension_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 49)) & 0x3);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcA_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 6)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcA_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 37)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcA_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 6)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcA_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 37)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcA_Y2(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 20)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcBDest_Y2(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 51)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcB_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcB_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcB_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_SrcB_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_UnaryOpcodeExtension_X0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_UnaryOpcodeExtension_X1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_UnaryOpcodeExtension_Y0(tilegx_bundle_bits num)
{
const unsigned int n = (unsigned int)num;
return (((n >> 12)) & 0x3f);
}
-static __inline unsigned int
+static inline unsigned int
get_UnaryOpcodeExtension_Y1(tilegx_bundle_bits n)
{
return (((unsigned int)(n >> 43)) & 0x3f);
}
-static __inline int
+static inline int
sign_extend(int n, int num_bits)
{
int shift = (int)(sizeof(int) * 8 - num_bits);
@@ -428,28 +428,28 @@ sign_extend(int n, int num_bits)
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_BFEnd_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_BFOpcodeExtension_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0xf) << 24);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_BFStart_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 18);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_BrOff_X1(int num)
{
const unsigned int n = (unsigned int)num;
@@ -457,14 +457,14 @@ create_BrOff_X1(int num)
(((tilegx_bundle_bits)(n & 0x0001ffc0)) << 37);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_BrType_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x1f)) << 54);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Dest_Imm8_X1(int num)
{
const unsigned int n = (unsigned int)num;
@@ -472,112 +472,112 @@ create_Dest_Imm8_X1(int num)
(((tilegx_bundle_bits)(n & 0x000000c0)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Dest_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 0);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Dest_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 31);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Dest_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 0);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Dest_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 31);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm16_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0xffff) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm16_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0xffff)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm8OpcodeExtension_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0xff) << 20);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm8OpcodeExtension_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0xff)) << 51);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm8_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0xff) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm8_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0xff)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm8_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0xff) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Imm8_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0xff)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_JumpOff_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x7ffffff)) << 31);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_JumpOpcodeExtension_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x1)) << 58);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_MF_Imm14_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3fff)) << 37);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_MT_Imm14_X1(int num)
{
const unsigned int n = (unsigned int)num;
@@ -585,42 +585,42 @@ create_MT_Imm14_X1(int num)
(((tilegx_bundle_bits)(n & 0x00003fc0)) << 37);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Mode(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3)) << 62);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Opcode_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x7) << 28);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Opcode_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x7)) << 59);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Opcode_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0xf) << 27);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Opcode_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0xf)) << 58);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_Opcode_Y2(int num)
{
const unsigned int n = (unsigned int)num;
@@ -628,182 +628,182 @@ create_Opcode_Y2(int num)
(((tilegx_bundle_bits)(n & 0x00000002)) << 56);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_RRROpcodeExtension_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3ff) << 18);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_RRROpcodeExtension_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3ff)) << 49);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_RRROpcodeExtension_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3) << 18);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_RRROpcodeExtension_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3)) << 49);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShAmt_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShAmt_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShAmt_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShAmt_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShiftOpcodeExtension_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3ff) << 18);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShiftOpcodeExtension_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3ff)) << 49);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShiftOpcodeExtension_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3) << 18);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_ShiftOpcodeExtension_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3)) << 49);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcA_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 6);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcA_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 37);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcA_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 6);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcA_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 37);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcA_Y2(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 20);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcBDest_Y2(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 51);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcB_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcB_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcB_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_SrcB_Y1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_UnaryOpcodeExtension_X0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_UnaryOpcodeExtension_X1(int num)
{
const unsigned int n = (unsigned int)num;
return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_UnaryOpcodeExtension_Y0(int num)
{
const unsigned int n = (unsigned int)num;
return ((n & 0x3f) << 12);
}
-static __inline tilegx_bundle_bits
+static inline tilegx_bundle_bits
create_UnaryOpcodeExtension_Y1(int num)
{
const unsigned int n = (unsigned int)num;
--
1.9.3
* [Qemu-devel] [PATCH 07/10 v12] target-tilegx: Add cpu basic features for linux-user
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (4 preceding siblings ...)
2015-06-13 13:15 ` [Qemu-devel] [PATCH 05/10 v12] target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage Chen Gang
@ 2015-06-13 13:18 ` Chen Gang
2015-06-13 13:18 ` [Qemu-devel] [PATCH 06/10 v12] target-tilegx: Add special register information from Tilera Corporation Chen Gang
` (4 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:18 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
It implements the minimal CPU features needed for linux-user.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/cpu.c | 143 ++++++++++++++++++++++++++++++++++++++++++
target-tilegx/cpu.h | 175 ++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 318 insertions(+)
create mode 100644 target-tilegx/cpu.c
create mode 100644 target-tilegx/cpu.h
diff --git a/target-tilegx/cpu.c b/target-tilegx/cpu.c
new file mode 100644
index 0000000..663fcb6
--- /dev/null
+++ b/target-tilegx/cpu.c
@@ -0,0 +1,143 @@
+/*
+ * QEMU TILE-Gx CPU
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+
+#include "cpu.h"
+#include "qemu-common.h"
+#include "hw/qdev-properties.h"
+#include "migration/vmstate.h"
+
+TileGXCPU *cpu_tilegx_init(const char *cpu_model)
+{
+ TileGXCPU *cpu;
+
+ cpu = TILEGX_CPU(object_new(TYPE_TILEGX_CPU));
+
+ object_property_set_bool(OBJECT(cpu), true, "realized", NULL);
+
+ return cpu;
+}
+
+static void tilegx_cpu_set_pc(CPUState *cs, vaddr value)
+{
+ TileGXCPU *cpu = TILEGX_CPU(cs);
+
+ cpu->env.pc = value;
+}
+
+static bool tilegx_cpu_has_work(CPUState *cs)
+{
+ return true;
+}
+
+static void tilegx_cpu_reset(CPUState *s)
+{
+ TileGXCPU *cpu = TILEGX_CPU(s);
+ TileGXCPUClass *tcc = TILEGX_CPU_GET_CLASS(cpu);
+ CPUTLGState *env = &cpu->env;
+
+ tcc->parent_reset(s);
+
+ memset(env, 0, sizeof(CPUTLGState));
+ tlb_flush(s, 1);
+}
+
+static void tilegx_cpu_realizefn(DeviceState *dev, Error **errp)
+{
+ CPUState *cs = CPU(dev);
+ TileGXCPUClass *tcc = TILEGX_CPU_GET_CLASS(dev);
+
+ cpu_reset(cs);
+ qemu_init_vcpu(cs);
+
+ tcc->parent_realize(dev, errp);
+}
+
+static void tilegx_cpu_initfn(Object *obj)
+{
+ CPUState *cs = CPU(obj);
+ TileGXCPU *cpu = TILEGX_CPU(obj);
+ CPUTLGState *env = &cpu->env;
+ static bool tcg_initialized;
+
+ cs->env_ptr = env;
+ cpu_exec_init(env);
+
+ if (tcg_enabled() && !tcg_initialized) {
+ tcg_initialized = true;
+ tilegx_tcg_init();
+ }
+}
+
+static void tilegx_cpu_do_interrupt(CPUState *cs)
+{
+ cs->exception_index = -1;
+}
+
+static int tilegx_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int rw,
+ int mmu_idx)
+{
+ cpu_dump_state(cs, stderr, fprintf, 0);
+ return 1;
+}
+
+static bool tilegx_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
+{
+ if (interrupt_request & CPU_INTERRUPT_HARD) {
+ tilegx_cpu_do_interrupt(cs);
+ return true;
+ }
+ return false;
+}
+
+static void tilegx_cpu_class_init(ObjectClass *oc, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(oc);
+ CPUClass *cc = CPU_CLASS(oc);
+ TileGXCPUClass *tcc = TILEGX_CPU_CLASS(oc);
+
+ tcc->parent_realize = dc->realize;
+ dc->realize = tilegx_cpu_realizefn;
+
+ tcc->parent_reset = cc->reset;
+ cc->reset = tilegx_cpu_reset;
+
+ cc->has_work = tilegx_cpu_has_work;
+ cc->do_interrupt = tilegx_cpu_do_interrupt;
+ cc->cpu_exec_interrupt = tilegx_cpu_exec_interrupt;
+ cc->set_pc = tilegx_cpu_set_pc;
+ cc->handle_mmu_fault = tilegx_cpu_handle_mmu_fault;
+ cc->gdb_num_core_regs = 0;
+}
+
+static const TypeInfo tilegx_cpu_type_info = {
+ .name = TYPE_TILEGX_CPU,
+ .parent = TYPE_CPU,
+ .instance_size = sizeof(TileGXCPU),
+ .instance_init = tilegx_cpu_initfn,
+ .class_size = sizeof(TileGXCPUClass),
+ .class_init = tilegx_cpu_class_init,
+};
+
+static void tilegx_cpu_register_types(void)
+{
+ type_register_static(&tilegx_cpu_type_info);
+}
+
+type_init(tilegx_cpu_register_types)
diff --git a/target-tilegx/cpu.h b/target-tilegx/cpu.h
new file mode 100644
index 0000000..e404025
--- /dev/null
+++ b/target-tilegx/cpu.h
@@ -0,0 +1,175 @@
+/*
+ * TILE-Gx virtual CPU header
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef CPU_TILEGX_H
+#define CPU_TILEGX_H
+
+#include "config.h"
+#include "qemu-common.h"
+
+#define TARGET_LONG_BITS 64
+
+#define CPUArchState struct CPUTLGState
+
+#include "exec/cpu-defs.h"
+
+
+/* TILE-Gx common register alias */
+#define TILEGX_R_RE 0 /* r0, function/syscall return value */
+#define TILEGX_R_ERR 1 /* r1, syscall errno flag */
+#define TILEGX_R_NR 10 /* r10, syscall number */
+#define TILEGX_R_BP 52 /* r52, optional frame pointer */
+#define TILEGX_R_TP 53 /* TP register, thread-local storage data */
+#define TILEGX_R_SP 54 /* SP register, stack pointer */
+#define TILEGX_R_LR 55 /* LR register, holds the return address, not the pc */
+#define TILEGX_R_COUNT 56 /* Only the first 56 registers are real */
+#define TILEGX_R_SN 56 /* SN register, obsolete; behaves like the zero register */
+#define TILEGX_R_IDN0 57 /* IDN0 register, raises IDN_ACCESS exception */
+#define TILEGX_R_IDN1 58 /* IDN1 register, raises IDN_ACCESS exception */
+#define TILEGX_R_UDN0 59 /* UDN0 register, raises UDN_ACCESS exception */
+#define TILEGX_R_UDN1 60 /* UDN1 register, raises UDN_ACCESS exception */
+#define TILEGX_R_UDN2 61 /* UDN2 register, raises UDN_ACCESS exception */
+#define TILEGX_R_UDN3 62 /* UDN3 register, raises UDN_ACCESS exception */
+#define TILEGX_R_ZERO 63 /* Zero register, always zero */
+#define TILEGX_R_NOREG 255 /* Invalid register value */
+
+/* TILE-Gx special-purpose registers visible to the guest */
+enum {
+ TILEGX_SPR_CMPEXCH = 0,
+ TILEGX_SPR_CRITICAL_SEC = 1,
+ TILEGX_SPR_SIM_CONTROL = 2,
+ TILEGX_SPR_COUNT
+};
+
+/* Exception numbers */
+enum {
+ TILEGX_EXCP_NONE = 0,
+ TILEGX_EXCP_SYSCALL = 1,
+ TILEGX_EXCP_OPCODE_UNKNOWN = 0x101,
+ TILEGX_EXCP_OPCODE_UNIMPLEMENTED = 0x102,
+ TILEGX_EXCP_OPCODE_CMPEXCH = 0x103,
+ TILEGX_EXCP_OPCODE_CMPEXCH4 = 0x104,
+ TILEGX_EXCP_OPCODE_EXCH = 0x105,
+ TILEGX_EXCP_OPCODE_EXCH4 = 0x106,
+ TILEGX_EXCP_OPCODE_FETCHADD = 0x107,
+ TILEGX_EXCP_OPCODE_FETCHADD4 = 0x108,
+ TILEGX_EXCP_OPCODE_FETCHADDGEZ = 0x109,
+ TILEGX_EXCP_OPCODE_FETCHADDGEZ4 = 0x10a,
+ TILEGX_EXCP_OPCODE_FETCHAND = 0x10b,
+ TILEGX_EXCP_OPCODE_FETCHAND4 = 0x10c,
+ TILEGX_EXCP_OPCODE_FETCHOR = 0x10d,
+ TILEGX_EXCP_OPCODE_FETCHOR4 = 0x10e,
+ TILEGX_EXCP_REG_IDN_ACCESS = 0x181,
+ TILEGX_EXCP_REG_UDN_ACCESS = 0x182,
+ TILEGX_EXCP_UNALIGNMENT = 0x201,
+ TILEGX_EXCP_DBUG_BREAK = 0x301
+};
+
+typedef struct CPUTLGState {
+ uint64_t regs[TILEGX_R_COUNT]; /* General registers visible to the guest */
+ uint64_t spregs[TILEGX_SPR_COUNT]; /* Special registers visible to the guest */
+ uint64_t pc; /* Current pc */
+
+#if defined(CONFIG_USER_ONLY)
+ uint32_t excparam; /* exception parameter */
+#endif
+
+ CPU_COMMON
+} CPUTLGState;
+
+#include "qom/cpu.h"
+
+#define TYPE_TILEGX_CPU "tilegx-cpu"
+
+#define TILEGX_CPU_CLASS(klass) \
+ OBJECT_CLASS_CHECK(TileGXCPUClass, (klass), TYPE_TILEGX_CPU)
+#define TILEGX_CPU(obj) \
+ OBJECT_CHECK(TileGXCPU, (obj), TYPE_TILEGX_CPU)
+#define TILEGX_CPU_GET_CLASS(obj) \
+ OBJECT_GET_CLASS(TileGXCPUClass, (obj), TYPE_TILEGX_CPU)
+
+/**
+ * TileGXCPUClass:
+ * @parent_realize: The parent class' realize handler.
+ * @parent_reset: The parent class' reset handler.
+ *
+ * A TILE-Gx CPU model.
+ */
+typedef struct TileGXCPUClass {
+ /*< private >*/
+ CPUClass parent_class;
+ /*< public >*/
+
+ DeviceRealize parent_realize;
+ void (*parent_reset)(CPUState *cpu);
+} TileGXCPUClass;
+
+/**
+ * TileGXCPU:
+ * @env: #CPUTLGState
+ *
+ * A TILE-Gx CPU.
+ */
+typedef struct TileGXCPU {
+ /*< private >*/
+ CPUState parent_obj;
+ /*< public >*/
+
+ CPUTLGState env;
+} TileGXCPU;
+
+static inline TileGXCPU *tilegx_env_get_cpu(CPUTLGState *env)
+{
+ return container_of(env, TileGXCPU, env);
+}
+
+#define ENV_GET_CPU(e) CPU(tilegx_env_get_cpu(e))
+
+#define ENV_OFFSET offsetof(TileGXCPU, env)
+
+/* TILE-Gx memory attributes */
+#define TARGET_PAGE_BITS 16 /* TILE-Gx uses 64KB page size */
+#define TARGET_PHYS_ADDR_SPACE_BITS 42
+#define TARGET_VIRT_ADDR_SPACE_BITS 64
+#define MMU_USER_IDX 0 /* Current memory operation is in user mode */
+
+#include "exec/cpu-all.h"
+
+void tilegx_tcg_init(void);
+int cpu_tilegx_exec(CPUTLGState *s);
+int cpu_tilegx_signal_handler(int host_signum, void *pinfo, void *puc);
+
+TileGXCPU *cpu_tilegx_init(const char *cpu_model);
+
+#define cpu_init(cpu_model) CPU(cpu_tilegx_init(cpu_model))
+
+#define cpu_exec cpu_tilegx_exec
+#define cpu_gen_code cpu_tilegx_gen_code
+#define cpu_signal_handler cpu_tilegx_signal_handler
+
+static inline void cpu_get_tb_cpu_state(CPUTLGState *env, target_ulong *pc,
+ target_ulong *cs_base, int *flags)
+{
+ *pc = env->pc;
+ *cs_base = 0;
+ *flags = 0;
+}
+
+#include "exec/exec-all.h"
+
+#endif
--
1.9.3
* [Qemu-devel] [PATCH 06/10 v12] target-tilegx: Add special register information from Tilera Corporation
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (5 preceding siblings ...)
2015-06-13 13:18 ` [Qemu-devel] [PATCH 07/10 v12] target-tilegx: Add cpu basic features for linux-user Chen Gang
@ 2015-06-13 13:18 ` Chen Gang
2015-06-13 13:19 ` [Qemu-devel] [PATCH 08/10 v12] target-tilegx: Add several helpers for instructions translation Chen Gang
` (3 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:18 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
This file is copied from the Linux kernel's
"arch/tile/include/uapi/arch/spr_def_64.h".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/spr_def_64.h | 216 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 216 insertions(+)
create mode 100644 target-tilegx/spr_def_64.h
diff --git a/target-tilegx/spr_def_64.h b/target-tilegx/spr_def_64.h
new file mode 100644
index 0000000..67a6c17
--- /dev/null
+++ b/target-tilegx/spr_def_64.h
@@ -0,0 +1,216 @@
+/*
+ * Copyright 2011 Tilera Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for
+ * more details.
+ */
+
+#ifndef __DOXYGEN__
+
+#ifndef __ARCH_SPR_DEF_64_H__
+#define __ARCH_SPR_DEF_64_H__
+
+#define SPR_AUX_PERF_COUNT_0 0x2105
+#define SPR_AUX_PERF_COUNT_1 0x2106
+#define SPR_AUX_PERF_COUNT_CTL 0x2107
+#define SPR_AUX_PERF_COUNT_STS 0x2108
+#define SPR_CMPEXCH_VALUE 0x2780
+#define SPR_CYCLE 0x2781
+#define SPR_DONE 0x2705
+#define SPR_DSTREAM_PF 0x2706
+#define SPR_EVENT_BEGIN 0x2782
+#define SPR_EVENT_END 0x2783
+#define SPR_EX_CONTEXT_0_0 0x2580
+#define SPR_EX_CONTEXT_0_1 0x2581
+#define SPR_EX_CONTEXT_0_1__PL_SHIFT 0
+#define SPR_EX_CONTEXT_0_1__PL_RMASK 0x3
+#define SPR_EX_CONTEXT_0_1__PL_MASK 0x3
+#define SPR_EX_CONTEXT_0_1__ICS_SHIFT 2
+#define SPR_EX_CONTEXT_0_1__ICS_RMASK 0x1
+#define SPR_EX_CONTEXT_0_1__ICS_MASK 0x4
+#define SPR_EX_CONTEXT_1_0 0x2480
+#define SPR_EX_CONTEXT_1_1 0x2481
+#define SPR_EX_CONTEXT_1_1__PL_SHIFT 0
+#define SPR_EX_CONTEXT_1_1__PL_RMASK 0x3
+#define SPR_EX_CONTEXT_1_1__PL_MASK 0x3
+#define SPR_EX_CONTEXT_1_1__ICS_SHIFT 2
+#define SPR_EX_CONTEXT_1_1__ICS_RMASK 0x1
+#define SPR_EX_CONTEXT_1_1__ICS_MASK 0x4
+#define SPR_EX_CONTEXT_2_0 0x2380
+#define SPR_EX_CONTEXT_2_1 0x2381
+#define SPR_EX_CONTEXT_2_1__PL_SHIFT 0
+#define SPR_EX_CONTEXT_2_1__PL_RMASK 0x3
+#define SPR_EX_CONTEXT_2_1__PL_MASK 0x3
+#define SPR_EX_CONTEXT_2_1__ICS_SHIFT 2
+#define SPR_EX_CONTEXT_2_1__ICS_RMASK 0x1
+#define SPR_EX_CONTEXT_2_1__ICS_MASK 0x4
+#define SPR_FAIL 0x2707
+#define SPR_IDN_AVAIL_EN 0x1a05
+#define SPR_IDN_DATA_AVAIL 0x0a80
+#define SPR_IDN_DEADLOCK_TIMEOUT 0x1806
+#define SPR_IDN_DEMUX_COUNT_0 0x0a05
+#define SPR_IDN_DEMUX_COUNT_1 0x0a06
+#define SPR_IDN_DIRECTION_PROTECT 0x1405
+#define SPR_IDN_PENDING 0x0a08
+#define SPR_ILL_TRANS_REASON__I_STREAM_VA_RMASK 0x1
+#define SPR_INTCTRL_0_STATUS 0x2505
+#define SPR_INTCTRL_1_STATUS 0x2405
+#define SPR_INTCTRL_2_STATUS 0x2305
+#define SPR_INTERRUPT_CRITICAL_SECTION 0x2708
+#define SPR_INTERRUPT_MASK_0 0x2506
+#define SPR_INTERRUPT_MASK_1 0x2406
+#define SPR_INTERRUPT_MASK_2 0x2306
+#define SPR_INTERRUPT_MASK_RESET_0 0x2507
+#define SPR_INTERRUPT_MASK_RESET_1 0x2407
+#define SPR_INTERRUPT_MASK_RESET_2 0x2307
+#define SPR_INTERRUPT_MASK_SET_0 0x2508
+#define SPR_INTERRUPT_MASK_SET_1 0x2408
+#define SPR_INTERRUPT_MASK_SET_2 0x2308
+#define SPR_INTERRUPT_VECTOR_BASE_0 0x2509
+#define SPR_INTERRUPT_VECTOR_BASE_1 0x2409
+#define SPR_INTERRUPT_VECTOR_BASE_2 0x2309
+#define SPR_INTERRUPT_VECTOR_BASE_3 0x2209
+#define SPR_IPI_EVENT_0 0x1f05
+#define SPR_IPI_EVENT_1 0x1e05
+#define SPR_IPI_EVENT_2 0x1d05
+#define SPR_IPI_EVENT_RESET_0 0x1f06
+#define SPR_IPI_EVENT_RESET_1 0x1e06
+#define SPR_IPI_EVENT_RESET_2 0x1d06
+#define SPR_IPI_EVENT_SET_0 0x1f07
+#define SPR_IPI_EVENT_SET_1 0x1e07
+#define SPR_IPI_EVENT_SET_2 0x1d07
+#define SPR_IPI_MASK_0 0x1f08
+#define SPR_IPI_MASK_1 0x1e08
+#define SPR_IPI_MASK_2 0x1d08
+#define SPR_IPI_MASK_RESET_0 0x1f09
+#define SPR_IPI_MASK_RESET_1 0x1e09
+#define SPR_IPI_MASK_RESET_2 0x1d09
+#define SPR_IPI_MASK_SET_0 0x1f0a
+#define SPR_IPI_MASK_SET_1 0x1e0a
+#define SPR_IPI_MASK_SET_2 0x1d0a
+#define SPR_MPL_AUX_PERF_COUNT_SET_0 0x2100
+#define SPR_MPL_AUX_PERF_COUNT_SET_1 0x2101
+#define SPR_MPL_AUX_PERF_COUNT_SET_2 0x2102
+#define SPR_MPL_AUX_TILE_TIMER_SET_0 0x1700
+#define SPR_MPL_AUX_TILE_TIMER_SET_1 0x1701
+#define SPR_MPL_AUX_TILE_TIMER_SET_2 0x1702
+#define SPR_MPL_IDN_ACCESS_SET_0 0x0a00
+#define SPR_MPL_IDN_ACCESS_SET_1 0x0a01
+#define SPR_MPL_IDN_ACCESS_SET_2 0x0a02
+#define SPR_MPL_IDN_AVAIL_SET_0 0x1a00
+#define SPR_MPL_IDN_AVAIL_SET_1 0x1a01
+#define SPR_MPL_IDN_AVAIL_SET_2 0x1a02
+#define SPR_MPL_IDN_COMPLETE_SET_0 0x0500
+#define SPR_MPL_IDN_COMPLETE_SET_1 0x0501
+#define SPR_MPL_IDN_COMPLETE_SET_2 0x0502
+#define SPR_MPL_IDN_FIREWALL_SET_0 0x1400
+#define SPR_MPL_IDN_FIREWALL_SET_1 0x1401
+#define SPR_MPL_IDN_FIREWALL_SET_2 0x1402
+#define SPR_MPL_IDN_TIMER_SET_0 0x1800
+#define SPR_MPL_IDN_TIMER_SET_1 0x1801
+#define SPR_MPL_IDN_TIMER_SET_2 0x1802
+#define SPR_MPL_INTCTRL_0_SET_0 0x2500
+#define SPR_MPL_INTCTRL_0_SET_1 0x2501
+#define SPR_MPL_INTCTRL_0_SET_2 0x2502
+#define SPR_MPL_INTCTRL_1_SET_0 0x2400
+#define SPR_MPL_INTCTRL_1_SET_1 0x2401
+#define SPR_MPL_INTCTRL_1_SET_2 0x2402
+#define SPR_MPL_INTCTRL_2_SET_0 0x2300
+#define SPR_MPL_INTCTRL_2_SET_1 0x2301
+#define SPR_MPL_INTCTRL_2_SET_2 0x2302
+#define SPR_MPL_IPI_0 0x1f04
+#define SPR_MPL_IPI_0_SET_0 0x1f00
+#define SPR_MPL_IPI_0_SET_1 0x1f01
+#define SPR_MPL_IPI_0_SET_2 0x1f02
+#define SPR_MPL_IPI_1 0x1e04
+#define SPR_MPL_IPI_1_SET_0 0x1e00
+#define SPR_MPL_IPI_1_SET_1 0x1e01
+#define SPR_MPL_IPI_1_SET_2 0x1e02
+#define SPR_MPL_IPI_2 0x1d04
+#define SPR_MPL_IPI_2_SET_0 0x1d00
+#define SPR_MPL_IPI_2_SET_1 0x1d01
+#define SPR_MPL_IPI_2_SET_2 0x1d02
+#define SPR_MPL_PERF_COUNT_SET_0 0x2000
+#define SPR_MPL_PERF_COUNT_SET_1 0x2001
+#define SPR_MPL_PERF_COUNT_SET_2 0x2002
+#define SPR_MPL_UDN_ACCESS_SET_0 0x0b00
+#define SPR_MPL_UDN_ACCESS_SET_1 0x0b01
+#define SPR_MPL_UDN_ACCESS_SET_2 0x0b02
+#define SPR_MPL_UDN_AVAIL_SET_0 0x1b00
+#define SPR_MPL_UDN_AVAIL_SET_1 0x1b01
+#define SPR_MPL_UDN_AVAIL_SET_2 0x1b02
+#define SPR_MPL_UDN_COMPLETE_SET_0 0x0600
+#define SPR_MPL_UDN_COMPLETE_SET_1 0x0601
+#define SPR_MPL_UDN_COMPLETE_SET_2 0x0602
+#define SPR_MPL_UDN_FIREWALL_SET_0 0x1500
+#define SPR_MPL_UDN_FIREWALL_SET_1 0x1501
+#define SPR_MPL_UDN_FIREWALL_SET_2 0x1502
+#define SPR_MPL_UDN_TIMER_SET_0 0x1900
+#define SPR_MPL_UDN_TIMER_SET_1 0x1901
+#define SPR_MPL_UDN_TIMER_SET_2 0x1902
+#define SPR_MPL_WORLD_ACCESS_SET_0 0x2700
+#define SPR_MPL_WORLD_ACCESS_SET_1 0x2701
+#define SPR_MPL_WORLD_ACCESS_SET_2 0x2702
+#define SPR_PASS 0x2709
+#define SPR_PERF_COUNT_0 0x2005
+#define SPR_PERF_COUNT_1 0x2006
+#define SPR_PERF_COUNT_CTL 0x2007
+#define SPR_PERF_COUNT_DN_CTL 0x2008
+#define SPR_PERF_COUNT_STS 0x2009
+#define SPR_PROC_STATUS 0x2784
+#define SPR_SIM_CONTROL 0x2785
+#define SPR_SINGLE_STEP_CONTROL_0 0x0405
+#define SPR_SINGLE_STEP_CONTROL_0__CANCELED_MASK 0x1
+#define SPR_SINGLE_STEP_CONTROL_0__INHIBIT_MASK 0x2
+#define SPR_SINGLE_STEP_CONTROL_1 0x0305
+#define SPR_SINGLE_STEP_CONTROL_1__CANCELED_MASK 0x1
+#define SPR_SINGLE_STEP_CONTROL_1__INHIBIT_MASK 0x2
+#define SPR_SINGLE_STEP_CONTROL_2 0x0205
+#define SPR_SINGLE_STEP_CONTROL_2__CANCELED_MASK 0x1
+#define SPR_SINGLE_STEP_CONTROL_2__INHIBIT_MASK 0x2
+#define SPR_SINGLE_STEP_EN_0_0 0x250a
+#define SPR_SINGLE_STEP_EN_0_1 0x240a
+#define SPR_SINGLE_STEP_EN_0_2 0x230a
+#define SPR_SINGLE_STEP_EN_1_0 0x250b
+#define SPR_SINGLE_STEP_EN_1_1 0x240b
+#define SPR_SINGLE_STEP_EN_1_2 0x230b
+#define SPR_SINGLE_STEP_EN_2_0 0x250c
+#define SPR_SINGLE_STEP_EN_2_1 0x240c
+#define SPR_SINGLE_STEP_EN_2_2 0x230c
+#define SPR_SYSTEM_SAVE_0_0 0x2582
+#define SPR_SYSTEM_SAVE_0_1 0x2583
+#define SPR_SYSTEM_SAVE_0_2 0x2584
+#define SPR_SYSTEM_SAVE_0_3 0x2585
+#define SPR_SYSTEM_SAVE_1_0 0x2482
+#define SPR_SYSTEM_SAVE_1_1 0x2483
+#define SPR_SYSTEM_SAVE_1_2 0x2484
+#define SPR_SYSTEM_SAVE_1_3 0x2485
+#define SPR_SYSTEM_SAVE_2_0 0x2382
+#define SPR_SYSTEM_SAVE_2_1 0x2383
+#define SPR_SYSTEM_SAVE_2_2 0x2384
+#define SPR_SYSTEM_SAVE_2_3 0x2385
+#define SPR_TILE_COORD 0x270b
+#define SPR_TILE_RTF_HWM 0x270c
+#define SPR_TILE_TIMER_CONTROL 0x1605
+#define SPR_UDN_AVAIL_EN 0x1b05
+#define SPR_UDN_DATA_AVAIL 0x0b80
+#define SPR_UDN_DEADLOCK_TIMEOUT 0x1906
+#define SPR_UDN_DEMUX_COUNT_0 0x0b05
+#define SPR_UDN_DEMUX_COUNT_1 0x0b06
+#define SPR_UDN_DEMUX_COUNT_2 0x0b07
+#define SPR_UDN_DEMUX_COUNT_3 0x0b08
+#define SPR_UDN_DIRECTION_PROTECT 0x1505
+#define SPR_UDN_PENDING 0x0b0a
+#define SPR_WATCH_MASK 0x200a
+#define SPR_WATCH_VAL 0x200b
+
+#endif /* !defined(__ARCH_SPR_DEF_64_H__) */
+
+#endif /* !defined(__DOXYGEN__) */
--
1.9.3
* [Qemu-devel] [PATCH 08/10 v12] target-tilegx: Add several helpers for instructions translation
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (6 preceding siblings ...)
2015-06-13 13:18 ` [Qemu-devel] [PATCH 06/10 v12] target-tilegx: Add special register information from Tilera Corporation Chen Gang
@ 2015-06-13 13:19 ` Chen Gang
[not found] ` <55A76DE6.4070103@hotmail.com>
2015-06-13 13:21 ` [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world" Chen Gang
` (2 subsequent siblings)
10 siblings, 1 reply; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:19 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
The helpers added here implement exception, cntlz, cnttz, shufflebytes,
and add_saturate.
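For reference, the saturation semantics of helper_add_saturate can be
sketched as plain host C (a minimal illustration only, not part of the
patch; the name host_addxsc is made up here):

    #include <stdint.h>

    /* addxsc semantics: 32-bit add; on signed overflow, clamp to
     * INT32_MAX or INT32_MIN, then sign-extend the result to 64 bits. */
    static int64_t host_addxsc(uint32_t a, uint32_t b)
    {
        uint32_t sum = a + b;

        if (((sum ^ a) & 0x80000000u) && !((a ^ b) & 0x80000000u)) {
            /* Overflow: pick the clamp value from the sign of a. */
            sum = ((int32_t)a < 0) ? 0x80000000u : 0x7fffffffu;
        }
        return (int64_t)(int32_t)sum;
    }

For example, host_addxsc(0x7fffffff, 1) yields 0x7fffffff instead of
wrapping to a negative value.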
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/helper.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++++++
target-tilegx/helper.h | 5 +++
2 files changed, 88 insertions(+)
create mode 100644 target-tilegx/helper.c
create mode 100644 target-tilegx/helper.h
diff --git a/target-tilegx/helper.c b/target-tilegx/helper.c
new file mode 100644
index 0000000..5ab41cd
--- /dev/null
+++ b/target-tilegx/helper.c
@@ -0,0 +1,83 @@
+/*
+ * QEMU TILE-Gx helpers
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+
+#include "cpu.h"
+#include "qemu-common.h"
+#include "exec/helper-proto.h"
+
+#define SIGNBIT32 0x80000000
+
+int64_t helper_add_saturate(CPUTLGState *env, uint64_t rsrc, uint64_t rsrcb)
+{
+ uint32_t rdst = rsrc + rsrcb;
+
+ if (((rdst ^ rsrc) & SIGNBIT32) && !((rsrc ^ rsrcb) & SIGNBIT32)) {
+ rdst = ~(((int32_t)rsrc >> 31) ^ SIGNBIT32);
+ }
+
+ return (int64_t)(int32_t)rdst; /* sign-extend the 32-bit result */
+}
+
+void helper_exception(CPUTLGState *env, uint32_t excp)
+{
+ CPUState *cs = CPU(tilegx_env_get_cpu(env));
+
+ cs->exception_index = excp;
+ cpu_loop_exit(cs);
+}
+
+uint64_t helper_cntlz(uint64_t arg)
+{
+ return clz64(arg);
+}
+
+uint64_t helper_cnttz(uint64_t arg)
+{
+ return ctz64(arg);
+}
+
+/*
+ * Functional Description
+ * uint64_t a = rf[SrcA];
+ * uint64_t b = rf[SrcB];
+ * uint64_t d = rf[Dest];
+ * uint64_t output = 0;
+ * unsigned int counter;
+ * for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
+ * {
+ * int sel = getByte (b, counter) & 0xf;
+ * uint8_t byte = (sel < 8) ? getByte (d, sel) : getByte (a, (sel - 8));
+ * output = setByte (output, counter, byte);
+ * }
+ * rf[Dest] = output;
+ */
+uint64_t helper_shufflebytes(uint64_t rdst, uint64_t rsrc, uint64_t rsrcb)
+{
+ uint64_t vdst = 0;
+ int count;
+
+ for (count = 0; count < 64; count += 8) {
+ uint64_t sel = rsrcb >> count;
+ uint64_t src = (sel & 8) ? rsrc : rdst;
+ vdst |= ((src >> ((sel & 7) * 8)) & 0xff) << count;
+ }
+
+ return vdst;
+}
diff --git a/target-tilegx/helper.h b/target-tilegx/helper.h
new file mode 100644
index 0000000..1411c19
--- /dev/null
+++ b/target-tilegx/helper.h
@@ -0,0 +1,5 @@
+DEF_HELPER_2(exception, noreturn, env, i32)
+DEF_HELPER_FLAGS_1(cntlz, TCG_CALL_NO_RWG_SE, i64, i64)
+DEF_HELPER_FLAGS_1(cnttz, TCG_CALL_NO_RWG_SE, i64, i64)
+DEF_HELPER_FLAGS_3(shufflebytes, TCG_CALL_NO_RWG_SE, i64, i64, i64, i64)
+DEF_HELPER_3(add_saturate, s64, env, i64, i64)
--
1.9.3
* [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world"
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (7 preceding siblings ...)
2015-06-13 13:19 ` [Qemu-devel] [PATCH 08/10 v12] target-tilegx: Add several helpers for instructions translation Chen Gang
@ 2015-06-13 13:21 ` Chen Gang
[not found] ` <55A76DB1.4090302@hotmail.com>
` (2 more replies)
2015-06-13 13:22 ` [Qemu-devel] [PATCH 10/10 v12] target-tilegx: Add TILE-Gx building files Chen Gang
2015-06-18 22:02 ` [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Peter Maydell
10 siblings, 3 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:21 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
Generate the related tcg instructions, so that qemu tilegx can run
"Hello world" to completion. The elf64 binary may be either static or shared.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/translate.c | 2966 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 2966 insertions(+)
create mode 100644 target-tilegx/translate.c
diff --git a/target-tilegx/translate.c b/target-tilegx/translate.c
new file mode 100644
index 0000000..1dd3a43
--- /dev/null
+++ b/target-tilegx/translate.c
@@ -0,0 +1,2966 @@
+/*
+ * QEMU TILE-Gx CPU
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+
+#include "cpu.h"
+#include "qemu/log.h"
+#include "disas/disas.h"
+#include "tcg-op.h"
+#include "exec/cpu_ldst.h"
+#include "opcode_tilegx.h"
+#include "spr_def_64.h"
+
+#define FMT64X "%016" PRIx64
+#define TILEGX_TMP_REGS (TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE + 1)
+
+static TCGv_ptr cpu_env;
+static TCGv cpu_pc;
+static TCGv cpu_regs[TILEGX_R_COUNT];
+static TCGv cpu_spregs[TILEGX_SPR_COUNT];
+#if defined(CONFIG_USER_ONLY)
+static TCGv_i32 cpu_excparam;
+#endif
+
+static const char * const reg_names[] = {
+ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
+ "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
+ "r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
+ "r24", "r25", "r26", "r27", "r28", "r29", "r30", "r31",
+ "r32", "r33", "r34", "r35", "r36", "r37", "r38", "r39",
+ "r40", "r41", "r42", "r43", "r44", "r45", "r46", "r47",
+ "r48", "r49", "r50", "r51", "bp", "tp", "sp", "lr"
+};
+
+static const char * const spreg_names[] = {
+ "cmpexch", "criticalsec", "simcontrol"
+};
+
+/* Bookkeeping for the temporary registers used within a bundle */
+typedef struct DisasContextTemp {
+ uint8_t idx; /* index */
+ TCGv val; /* value */
+} DisasContextTemp;
+
+/* This is the state at translation time. */
+typedef struct DisasContext {
+ uint64_t pc; /* Current pc */
+ int exception; /* Current exception */
+
+ TCGv zero; /* For zero register */
+
+ DisasContextTemp *tmp_regcur; /* Next free temporary register */
+ DisasContextTemp tmp_regs[TILEGX_TMP_REGS]; /* All temporary registers */
+ struct {
+ TCGCond cond; /* Branch condition */
+ TCGv dest; /* Jump destination pc, if the branch is taken */
+ TCGv val1; /* First value for the condition comparison */
+ TCGv val2; /* Second value for the condition comparison */
+ } jmp; /* Jump state; at most one branch per TB */
+} DisasContext;
+
+#include "exec/gen-icount.h"
+
+static void gen_exception(DisasContext *dc, int num)
+{
+ TCGv_i32 tmp = tcg_const_i32(num);
+
+ gen_helper_exception(cpu_env, tmp);
+ tcg_temp_free_i32(tmp);
+}
+
+/*
+ * All exceptions that allow execution to continue are issued in pipe x1,
+ * the last pipe of a bundle, so it is sufficient to record only the first
+ * exception raised within a bundle.
+ */
+static void set_exception(DisasContext *dc, int num)
+{
+ if (dc->exception == TILEGX_EXCP_NONE) {
+ dc->exception = num;
+ }
+}
+
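+/*
+ * Return true if "reg" is one of the 56 real general registers.  For the
+ * pseudo registers, raise the IDN/UDN access exception where the hardware
+ * would, and return false so that the caller substitutes a dummy value.
+ */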
+static bool check_gr(DisasContext *dc, uint8_t reg)
+{
+ if (likely(reg < TILEGX_R_COUNT)) {
+ return true;
+ }
+
+ switch (reg) {
+ case TILEGX_R_SN:
+ case TILEGX_R_ZERO:
+ break;
+ case TILEGX_R_IDN0:
+ case TILEGX_R_IDN1:
+ set_exception(dc, TILEGX_EXCP_REG_IDN_ACCESS);
+ break;
+ case TILEGX_R_UDN0:
+ case TILEGX_R_UDN1:
+ case TILEGX_R_UDN2:
+ case TILEGX_R_UDN3:
+ set_exception(dc, TILEGX_EXCP_REG_UDN_ACCESS);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return false;
+}
+
+static TCGv load_zero(DisasContext *dc)
+{
+ if (TCGV_IS_UNUSED_I64(dc->zero)) {
+ dc->zero = tcg_const_i64(0);
+ }
+ return dc->zero;
+}
+
+static TCGv load_gr(DisasContext *dc, uint8_t reg)
+{
+ if (check_gr(dc, reg)) {
+ return cpu_regs[reg];
+ }
+ return load_zero(dc);
+}
+
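+/*
+ * Results are staged in temporaries while a bundle is translated, so that
+ * every pipe within the bundle observes the register values from before
+ * the bundle; the temporaries are written back afterwards.
+ */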
+static TCGv dest_gr(DisasContext *dc, uint8_t rdst)
+{
+ DisasContextTemp *tmp = dc->tmp_regcur++;
+
+ /* Skip the result, mark the exception if necessary, and continue */
+ check_gr(dc, rdst);
+ assert((dc->tmp_regcur - dc->tmp_regs) < TILEGX_TMP_REGS);
+ tmp->idx = rdst;
+ tmp->val = tcg_temp_new_i64();
+ return tmp->val;
+}
+
+static void gen_atomic_excp(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ int excp, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+#if defined(CONFIG_USER_ONLY)
+ tcg_gen_movi_i32(cpu_excparam, (rdst << 16) | (rsrc << 8) | rsrcb);
+ tcg_gen_movi_i64(cpu_pc, dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+ set_exception(dc, excp);
+#else
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+#endif
+}
+
+static void gen_swint1(struct DisasContext *dc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "swint1\n");
+
+ tcg_gen_movi_i64(cpu_pc, dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+ set_exception(dc, TILEGX_EXCP_SYSCALL);
+}
+
+/*
+ * Many SPR reads/writes have side effects and cannot be buffered. However,
+ * they are all in the X1 pipe, which executes last, so no additional
+ * buffering is needed.
+ */
+
+static void gen_mfspr(struct DisasContext *dc, uint8_t rdst, uint16_t imm14)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mfspr r%d, 0x%x\n", rdst, imm14);
+
+ if (!check_gr(dc, rdst)) {
+ return;
+ }
+
+ switch (imm14) {
+ case SPR_CMPEXCH_VALUE:
+ tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_CMPEXCH]);
+ return;
+ case SPR_INTERRUPT_CRITICAL_SECTION:
+ tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_CRITICAL_SEC]);
+ return;
+ case SPR_SIM_CONTROL:
+ tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_SIM_CONTROL]);
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP, "UNIMP mfspr 0x%x.\n", imm14);
+ }
+
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+}
+
+static void gen_mtspr(struct DisasContext *dc, uint8_t rsrc, uint16_t imm14)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mtspr 0x%x, r%d\n", imm14, rsrc);
+
+ switch (imm14) {
+ case SPR_CMPEXCH_VALUE:
+ tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_CMPEXCH], load_gr(dc, rsrc));
+ return;
+ case SPR_INTERRUPT_CRITICAL_SECTION:
+ tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_CRITICAL_SEC], load_gr(dc, rsrc));
+ return;
+ case SPR_SIM_CONTROL:
+ tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_SIM_CONTROL], load_gr(dc, rsrc));
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP, "UNIMP mtspr 0x%x.\n", imm14);
+ }
+
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+}
+
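+/*
+ * The v1 operations work on eight independent byte lanes of a 64-bit
+ * register; extract_v1 pulls one lane out and insert_v1 merges one
+ * lane back in.
+ */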
+static void extract_v1(TCGv out, TCGv in, unsigned byte)
+{
+ tcg_gen_shri_i64(out, in, byte * 8);
+ tcg_gen_ext8u_i64(out, out);
+}
+
+static void insert_v1(TCGv out, TCGv in, unsigned byte)
+{
+ tcg_gen_deposit_i64(out, out, in, byte * 8, 8);
+}
+
+static void gen_v1cmpi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8,
+ TCGCond cond, const char *code)
+{
+ int count;
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv vsrc = load_gr(dc, rsrc);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
+ code, rdst, rsrc, imm8);
+
+ tcg_gen_movi_i64(vdst, 0);
+ for (count = 0; count < 8; count++) {
+ extract_v1(tmp, vsrc, count);
+ tcg_gen_setcondi_i64(cond, tmp, tmp, imm8);
+ insert_v1(vdst, tmp, count);
+ }
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_v1cmp(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ TCGCond cond, const char *code)
+{
+ int count;
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv vsrc = load_gr(dc, rsrc);
+ TCGv vsrcb = load_gr(dc, rsrcb);
+ TCGv tmp = tcg_temp_new_i64();
+ TCGv tmp2 = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+
+ tcg_gen_movi_i64(vdst, 0);
+ for (count = 0; count < 8; count++) {
+ extract_v1(tmp, vsrc, count);
+ extract_v1(tmp2, vsrcb, count);
+ tcg_gen_setcond_i64(cond, tmp, tmp, tmp2);
+ insert_v1(vdst, tmp, count);
+ }
+ tcg_temp_free_i64(tmp2);
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_v1shrui(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ int count;
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv vsrc = load_gr(dc, rsrc);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1shrui r%d, r%d, %u\n",
+ rdst, rsrc, shamt);
+
+ shamt &= 7;
+ tcg_gen_movi_i64(vdst, 0);
+ for (count = 0; count < 8; count++) {
+ extract_v1(tmp, vsrc, count);
+ tcg_gen_shri_i64(tmp, tmp, shamt);
+ insert_v1(vdst, tmp, count);
+ }
+ tcg_temp_free_i64(tmp);
+}
+
+/*
+ * Description
+ *
+ * Interleave the four low-order bytes of the first operand with the four
+ * low-order bytes of the second operand. The low-order byte of the result will
+ * be the low-order byte of the second operand. For example if the first operand
+ * contains the packed bytes {A7,A6,A5,A4,A3,A2,A1,A0} and the second operand
+ * contains the packed bytes {B7,B6,B5,B4,B3,B2,B1,B0} then the result will be
+ * {A3,B3,A2,B2,A1,B1,A0,B0}.
+ *
+ * Functional Description
+ *
+ * uint64_t output = 0;
+ * uint32_t counter;
+ * for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
+ * {
+ * bool asel = ((counter & 1) == 1);
+ * int in_sel = 0 + counter / 2;
+ * int8_t srca = getByte (rf[SrcA], in_sel);
+ * int8_t srcb = getByte (rf[SrcB], in_sel);
+ * output = setByte (output, counter, (asel ? srca : srcb));
+ * }
+ * rf[Dest] = output;
+ */
+static void gen_v1int_l(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ int count;
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv vsrc = load_gr(dc, rsrc);
+ TCGv vsrcb = load_gr(dc, rsrcb);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1int_l r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_movi_i64(vdst, 0);
+ for (count = 0; count < 4; count++) {
+ extract_v1(tmp, vsrc, count);
+ insert_v1(vdst, tmp, 2 * count + 1);
+ extract_v1(tmp, vsrcb, count);
+ insert_v1(vdst, tmp, 2 * count);
+ }
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_v4int_l(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ tcg_gen_deposit_i64(dest_gr(dc, rdst), load_gr(dc, rsrcb),
+ load_gr(dc, rsrc), 32, 32);
+}
+
+static void gen_cmpi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8,
+ TCGCond cond, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
+ code, rdst, rsrc, imm8);
+ tcg_gen_setcondi_i64(cond,
+ dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_cmp(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ TCGCond cond, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+ tcg_gen_setcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ load_gr(dc, rsrcb));
+}
+
+static void gen_cmov(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ TCGCond cond, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+ tcg_gen_movcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ load_zero(dc), load_gr(dc, rsrcb), load_gr(dc, rdst));
+}
+
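+/* mnz/mz: Dest = (SrcA != 0, respectively SrcA == 0) ? SrcB : 0 */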
+static void gen_menz(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ TCGCond cond, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+
+ tcg_gen_movcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ load_zero(dc), load_gr(dc, rsrcb), load_zero(dc));
+}
+
+static void gen_add(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "add r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_add_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_addimm(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int16_t imm)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "add(l)i r%d, r%d, %d\n",
+ rdst, rsrc, imm);
+ tcg_gen_addi_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm);
+}
+
+static void gen_addxsc(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "addxsc r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ gen_helper_add_saturate(dest_gr(dc, rdst), cpu_env,
+ load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_addx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "addx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_add_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_addximm(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int16_t imm)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "addx(l)i r%d, r%d, %d\n",
+ rdst, rsrc, imm);
+ tcg_gen_addi_i64(vdst, load_gr(dc, rsrc), imm);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_sub(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "sub r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_sub_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_subx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "subx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_sub_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+/*
+ * uint64_t mask = 0;
+ * int64_t background = ((rf[SrcA] >> BFEnd) & 1) ? -1ULL : 0ULL;
+ * mask = ((-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1));
+ * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
+ * | (rf[SrcA] << (64 - BFStart));
+ * rf[Dest] = (rot_src & mask) | (background & ~mask);
+ */
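+/*
+ * Worked example (ours, not from the ISA document): bfexts with
+ * BFStart = 4, BFEnd = 11 extracts bits 4..11 and sign-extends from
+ * bit 11, so for rf[SrcA] = 0xf50 the field is 0xf5 and the result
+ * is 0xfffffffffffffff5.
+ */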
+static void gen_bfexts(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
+ uint8_t start, uint8_t end)
+{
+ uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfexts r%d, r%d, %d, %d\n",
+ rdst, rsrc, start, end);
+
+ tcg_gen_rotri_i64(vdst, load_gr(dc, rsrc), start);
+ tcg_gen_andi_i64(vdst, vdst, mask);
+
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), end);
+ tcg_gen_andi_i64(tmp, tmp, 1);
+ tcg_gen_neg_i64(tmp, tmp);
+ tcg_gen_andi_i64(tmp, tmp, ~mask);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+
+ tcg_temp_free_i64(tmp);
+}
+
+/*
+ * The related functional description for bfextu in isa document:
+ *
+ * uint64_t mask = 0;
+ * mask = (-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1);
+ * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
+ * | (rf[SrcA] << (64 - BFStart));
+ * rf[Dest] = rot_src & mask;
+ */
+static void gen_bfextu(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
+ uint8_t start, uint8_t end)
+{
+ uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfextu r%d, r%d, %d, %d\n",
+ rdst, rsrc, start, end);
+
+ tcg_gen_rotri_i64(vdst, load_gr(dc, rsrc), start);
+ tcg_gen_andi_i64(vdst, vdst, mask);
+}
+
+/*
+ * mask = (start <= end) ? ((-1ULL << start) ^ ((-1ULL << end) << 1))
+ * : ((-1ULL << start) | (-1ULL >> (63 - end)));
+ * uint64_t rot_src = (rf[SrcA] << start)
+ * | ((uint64_t) rf[SrcA] >> (64 - start));
+ * rf[Dest] = (rot_src & mask) | (rf[Dest] & (-1ULL ^ mask));
+ */
+static void gen_bfins(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
+ uint8_t start, uint8_t end)
+{
+ uint64_t mask = (start <= end) ? ((-1ULL << start) ^ ((-1ULL << end) << 1))
+ : ((-1ULL << start) | (-1ULL >> (63 - end)));
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfins r%d, r%d, %d, %d\n",
+ rdst, rsrc, start, end);
+
+ tcg_gen_rotli_i64(tmp, load_gr(dc, rsrc), start);
+
+ tcg_gen_andi_i64(tmp, tmp, mask);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rdst), -1ULL ^ mask);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_or(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "or r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_or_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_ori(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "ori r%d, r%d, %d\n", rdst, rsrc, imm8);
+ tcg_gen_ori_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_xor(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "xor r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_xor_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_xori(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "xori r%d, r%d, %d\n", rdst, rsrc, imm8);
+ tcg_gen_xori_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_nor(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "nor r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_nor_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_and(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "and r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_and_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_andi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "andi r%d, r%d, %d\n", rdst, rsrc, imm8);
+ tcg_gen_andi_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_mulx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ bool add, const char *code)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+
+ tcg_gen_mul_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
+ if (add) {
+ tcg_gen_add_i64(vdst, load_gr(dc, rdst), vdst);
+ }
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_mul(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ bool add, bool high, bool sign,
+ bool highb, bool signb,
+ const char *code)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv vsrc = load_gr(dc, rsrc);
+ TCGv vsrcb = load_gr(dc, rsrcb);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
+ code, rdst, rsrc, rsrcb);
+
+ if (high) {
+ tcg_gen_shri_i64(tmp, vsrc, 32);
+ } else {
+ tcg_gen_andi_i64(tmp, vsrc, 0xffffffff);
+ }
+ if (sign) {
+ tcg_gen_ext32s_i64(tmp, tmp);
+ }
+
+ if (highb) {
+ tcg_gen_shri_i64(vdst, vsrcb, 32);
+ } else {
+ tcg_gen_andi_i64(vdst, vsrcb, 0xffffffff);
+ }
+ if (signb) {
+ tcg_gen_ext32s_i64(vdst, vdst);
+ }
+
+ tcg_gen_mul_i64(vdst, tmp, vdst);
+
+ if (add) {
+ tcg_gen_add_i64(vdst, load_gr(dc, rdst), vdst);
+ }
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_shlx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shlx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 31);
+ tcg_gen_shl_i64(vdst, load_gr(dc, rsrc), vdst);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_shl(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
+ tcg_gen_shl_i64(vdst, load_gr(dc, rsrc), vdst);
+}
+
+static void gen_shli(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shli r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_shli_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
+}
+
+static void gen_shlxi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shlxi r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shamt & 31);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_shladd(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ uint8_t shift, bool cast)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl%dadd%s r%d, r%d, r%d\n",
+ shift, cast ? "x" : "", rdst, rsrc, rsrcb);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shift);
+ tcg_gen_add_i64(vdst, vdst, load_gr(dc, rsrcb));
+ if (cast) {
+ tcg_gen_ext32s_i64(vdst, vdst);
+ }
+}
+
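+/*
+ * shl16insli: shift SrcA left by 16 and OR in a 16-bit immediate; a
+ * chain of these builds up a 64-bit constant 16 bits at a time.
+ */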
+static void gen_shl16insli(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint16_t uimm16)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl16insli r%d, r%d, 0x%x\n",
+ rdst, rsrc, uimm16);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), 16);
+ tcg_gen_ori_i64(vdst, vdst, uimm16);
+}
+
+static void gen_shrs(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrs r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
+ tcg_gen_sar_i64(vdst, load_gr(dc, rsrc), vdst);
+}
+
+static void gen_shrux(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrux r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 31);
+ tcg_gen_andi_i64(tmp, load_gr(dc, rsrc), 0xffffffff);
+ tcg_gen_shr_i64(vdst, tmp, vdst);
+ tcg_gen_ext32s_i64(vdst, vdst);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_shru(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shru r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
+ tcg_gen_shr_i64(vdst, load_gr(dc, rsrc), vdst);
+}
+
+static void gen_shufflebytes(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shufflebytes r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ gen_helper_shufflebytes(dest_gr(dc, rdst), load_gr(dc, rdst),
+ load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_shrsi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrsi r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_sari_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
+}
+
+static void gen_shrui(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrui r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_shri_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
+}
+
+static void gen_shruxi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shruxi r%d, r%d, %u\n",
+ rdst, rsrc, shamt);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
+ tcg_gen_shri_i64(vdst, vdst, shamt & 31);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_rotl(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "rotl r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_rotl_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_rotli(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "rotli r%d, r%d, %u\n",
+ rdst, rsrc, shamt);
+ tcg_gen_rotli_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
+}
+
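+/*
+ * dblalign: merge two registers for an unaligned load.  The low three
+ * bits of SrcB give a byte offset; Dest is shifted right by it and the
+ * vacated high bytes are filled from SrcA (description inferred from
+ * the code below).
+ */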
+static void gen_dblalign(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv mask = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "dblalign r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_andi_i64(mask, load_gr(dc, rsrcb), 7);
+ tcg_gen_shli_i64(mask, mask, 3);
+ tcg_gen_shr_i64(vdst, load_gr(dc, rdst), mask);
+
+ tcg_gen_xori_i64(mask, mask, 63);
+ tcg_gen_shl_i64(mask, load_gr(dc, rsrc), mask);
+ tcg_gen_shli_i64(mask, mask, 1);
+
+ tcg_gen_or_i64(vdst, vdst, mask);
+
+ tcg_temp_free_i64(mask);
+}
+
+static void gen_cntlz(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cntlz r%d, r%d\n", rdst, rsrc);
+ gen_helper_cntlz(dest_gr(dc, rdst), load_gr(dc, rsrc));
+}
+
+static void gen_cnttz(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cnttz r%d, r%d\n", rdst, rsrc);
+ gen_helper_cnttz(dest_gr(dc, rdst), load_gr(dc, rsrc));
+}
+
+static void gen_ld(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc,
+ TCGMemOp ops, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d\n", code, rdst, rsrc);
+ tcg_gen_qemu_ld_i64(dest_gr(dc, rdst), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+}
+
+static void gen_ld_add(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8,
+ TCGMemOp ops, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
+ code, rdst, rsrc, imm8);
+
+ tcg_gen_qemu_ld_i64(dest_gr(dc, rdst), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+ tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_st(struct DisasContext *dc,
+ uint8_t rsrc, uint8_t rsrcb,
+ TCGMemOp ops, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d\n", code, rsrc, rsrcb);
+ tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+}
+
+static void gen_st_add(struct DisasContext *dc,
+ uint8_t rsrc, uint8_t rsrcb, uint8_t imm8,
+ TCGMemOp ops, const char *code)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
+ code, rsrc, rsrcb, imm8);
+ tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+ tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
+}
+
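+/* lnk: write the address of the next bundle into Dest, without jumping */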
+static void gen_lnk(struct DisasContext *dc, uint8_t rdst)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "lnk r%d\n", rdst);
+ tcg_gen_movi_i64(dest_gr(dc, rdst), dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+}
+
+static void gen_b(struct DisasContext *dc,
+ uint8_t rsrc, int32_t off, TCGCond cond, const char *code)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, %d ([" TARGET_FMT_lx "] %s)\n",
+ code, rsrc, off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ dc->jmp.val1 = tcg_temp_new_i64();
+ dc->jmp.val2 = tcg_temp_new_i64();
+
+ dc->jmp.cond = cond;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+ tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
+ tcg_gen_movi_i64(dc->jmp.val2, 0);
+}
+
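+/* Branches that test only the low bit of SrcA (blbc/blbs and friends) */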
+static void gen_blb(struct DisasContext *dc, uint8_t rsrc, int32_t off,
+ TCGCond cond, const char *code)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, %d ([" TARGET_FMT_lx "] %s)\n",
+ code, rsrc, off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ dc->jmp.val1 = tcg_temp_new_i64();
+ dc->jmp.val2 = tcg_temp_new_i64();
+
+ dc->jmp.cond = cond;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+ tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
+ tcg_gen_andi_i64(dc->jmp.val1, dc->jmp.val1, 1ULL);
+ tcg_gen_movi_i64(dc->jmp.val2, 0);
+}
+
+/* Memory fence */
+static void gen_mf(struct DisasContext *dc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mf\n");
+ /* FIXME: Do we need any implementation for this? Probably not. */
+}
+
+/* wh64: write-hint 64 bytes, a cache-line hint */
+static void gen_wh64(struct DisasContext *dc, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "wh64 r%d\n", rsrc);
+ /* FIXME: Do we need any implementation for this? Probably not. */
+}
+
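+/* jr/jalr mask the target address to bundle (8-byte) alignment */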
+static void gen_jr(struct DisasContext *dc, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "jr(p) r%d\n", rsrc);
+
+ dc->jmp.dest = tcg_temp_new_i64();
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_andi_i64(dc->jmp.dest, load_gr(dc, rsrc), ~(sizeof(uint64_t) - 1));
+}
+
+static void gen_jalr(struct DisasContext *dc, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "jalr(p) r%d\n", rsrc);
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ tcg_gen_movi_i64(dest_gr(dc, TILEGX_R_LR),
+ dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_andi_i64(dc->jmp.dest, load_gr(dc, rsrc), ~(sizeof(uint64_t) - 1));
+}
+
+static void gen_j(struct DisasContext *dc, int off)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "j %d ([" TARGET_FMT_lx "] %s)\n",
+ off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+}
+
+static void gen_jal(struct DisasContext *dc, int off)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "jal %d ([" TARGET_FMT_lx "] %s)\n",
+ off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ tcg_gen_movi_i64(dest_gr(dc, TILEGX_R_LR),
+ dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+}
+
+static void decode_rrr_0_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case ADD_RRR_0_OPCODE_Y0:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADDX_RRR_0_OPCODE_Y0:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_Y0:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUB_RRR_0_OPCODE_Y0:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_u_opcode_ex_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_UnaryOpcodeExtension_Y0(bundle)) {
+ case CNTLZ_UNARY_OPCODE_Y0:
+ gen_cntlz(dc, rdst, rsrc);
+ return;
+ case CNTTZ_UNARY_OPCODE_Y0:
+ gen_cnttz(dc, rdst, rsrc);
+ return;
+ case FNOP_UNARY_OPCODE_Y0:
+ case NOP_UNARY_OPCODE_Y0:
+ if (!rsrc && !rdst) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ /* Fall through */
+ case FSINGLE_PACK1_UNARY_OPCODE_Y0:
+ case PCNT_UNARY_OPCODE_Y0:
+ case REVBITS_UNARY_OPCODE_Y0:
+ case REVBYTES_UNARY_OPCODE_Y0:
+ case TBLIDXB0_UNARY_OPCODE_Y0:
+ case TBLIDXB1_UNARY_OPCODE_Y0:
+ case TBLIDXB2_UNARY_OPCODE_Y0:
+ case TBLIDXB3_UNARY_OPCODE_Y0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP decode_u_opcode_ex_y0, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_1_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case UNARY_RRR_1_OPCODE_Y0:
+ return decode_u_opcode_ex_y0(dc, bundle);
+ case SHL1ADD_RRR_1_OPCODE_Y0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
+ return;
+ case SHL2ADD_RRR_1_OPCODE_Y0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
+ return;
+ case SHL3ADD_RRR_1_OPCODE_Y0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_2_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case CMPLES_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
+ return;
+ case CMPLEU_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
+ return;
+ case CMPLTS_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
+ return;
+ case CMPLTU_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_3_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case CMPEQ_RRR_3_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
+ return;
+ case CMPNE_RRR_3_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
+ return;
+ case MULAX_RRR_3_OPCODE_Y0:
+ gen_mulx(dc, rdst, rsrc, rsrcb, true, "mulax");
+ return;
+ case MULX_RRR_3_OPCODE_Y0:
+ gen_mulx(dc, rdst, rsrc, rsrcb, false, "mulx");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_4_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case CMOVNEZ_RRR_4_OPCODE_Y0:
+ gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmovnez");
+ return;
+ case CMOVEQZ_RRR_4_OPCODE_Y0:
+ gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmoveqz");
+ return;
+ case MNZ_RRR_4_OPCODE_Y0:
+ gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "mnz");
+ return;
+ case MZ_RRR_4_OPCODE_Y0:
+ gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "mz");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_5_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case OR_RRR_5_OPCODE_Y0:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_5_OPCODE_Y0:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case NOR_RRR_5_OPCODE_Y0:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case XOR_RRR_5_OPCODE_Y0:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_6_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case ROTL_RRR_6_OPCODE_Y0:
+ gen_rotl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL_RRR_6_OPCODE_Y0:
+ gen_shl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRS_RRR_6_OPCODE_Y0:
+ gen_shrs(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRU_RRR_6_OPCODE_Y0:
+ gen_shru(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_9_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case MULA_HU_HU_RRR_9_OPCODE_Y0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, false, true, false, "mula_hu_hu");
+ return;
+ case MULA_LU_LU_RRR_9_OPCODE_Y0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, false, false, false, false, "mula_lu_lu");
+ return;
+ case MULA_HS_HS_RRR_9_OPCODE_Y0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, true, true, true, "mula_hs_hs");
+ return;
+ case MULA_LS_LS_RRR_9_OPCODE_Y0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, false, true, false, true, "mula_ls_ls");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_shift_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t shamt = get_ShAmt_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_ShiftOpcodeExtension_Y0(bundle)) {
+ case ROTLI_SHIFT_OPCODE_Y0:
+ gen_rotli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLI_SHIFT_OPCODE_Y0:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_Y0:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_Y0:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_0_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case ADDX_SPECIAL_0_OPCODE_Y1:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADD_SPECIAL_0_OPCODE_Y1:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_Y1:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUB_RRR_0_OPCODE_Y1:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_u_opcode_ex_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_UnaryOpcodeExtension_Y1(bundle)) {
+ case NOP_UNARY_OPCODE_Y1:
+ case FNOP_UNARY_OPCODE_Y1:
+ if (!rsrc && !rdst) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ break;
+ case JALRP_UNARY_OPCODE_Y1:
+ case JALR_UNARY_OPCODE_Y1:
+ if (!rdst) {
+ gen_jalr(dc, rsrc);
+ return;
+ }
+ break;
+ case JR_UNARY_OPCODE_Y1:
+ case JRP_UNARY_OPCODE_Y1:
+ if (!rdst) {
+ gen_jr(dc, rsrc);
+ return;
+ }
+ break;
+ case LNK_UNARY_OPCODE_Y1:
+ if (!rsrc) {
+ gen_lnk(dc, rdst);
+ return;
+ }
+ break;
+ case ILL_UNARY_OPCODE_Y1:
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP decode_u_opcode_ex_y1, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+}
+
+static void decode_rrr_1_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case UNARY_RRR_1_OPCODE_Y1:
+ return decode_u_opcode_ex_y1(dc, bundle);
+ case SHL1ADD_RRR_1_OPCODE_Y1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
+ return;
+ case SHL2ADD_RRR_1_OPCODE_Y1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
+ return;
+ case SHL3ADD_RRR_1_OPCODE_Y1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_2_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case CMPLES_RRR_2_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
+ return;
+ case CMPLEU_RRR_2_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
+ return;
+ case CMPLTS_RRR_2_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
+ return;
+ case CMPLTU_RRR_2_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_3_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case CMPEQ_RRR_3_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
+ return;
+ case CMPNE_RRR_3_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_5_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case OR_RRR_5_OPCODE_Y1:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_5_OPCODE_Y1:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case NOR_RRR_5_OPCODE_Y1:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case XOR_RRR_5_OPCODE_Y1:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_shift_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+ uint8_t shamt = get_ShAmt_Y1(bundle);
+
+    switch (get_ShiftOpcodeExtension_Y1(bundle)) {
+ case ROTLI_SHIFT_OPCODE_Y1:
+ gen_rotli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLI_SHIFT_OPCODE_Y1:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_Y1:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_Y1:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_ldst0_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrca = get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YA2:
+ gen_ld(dc, rsrcbdst, rsrca, MO_SB, "ld1s");
+ return;
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrca, rsrcbdst, MO_UB, "st1");
+ return;
+ case MODE_OPCODE_YB2:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP ldst0_opcode_y2, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_ldst1_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+    uint8_t rsrc = get_SrcA_Y2(bundle);
+    uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YA2:
+ if (rsrcbdst == TILEGX_R_ZERO) {
+ /* Need nothing */
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "prefetch r%d\n", rsrc);
+ return;
+ }
+ gen_ld(dc, rsrcbdst, rsrc, MO_UB, "ld1u");
+ return;
+ case MODE_OPCODE_YB2:
+ gen_ld(dc, rsrcbdst, rsrc, MO_LESL, "ld4s");
+ return;
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrc, rsrcbdst, MO_LEUW, "st2");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_ldst2_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrc, rsrcbdst, MO_LEUL, "st4");
+ return;
+ case MODE_OPCODE_YB2:
+ gen_ld(dc, rsrcbdst, rsrc, MO_LEUL, "ld4u");
+ return;
+ case MODE_OPCODE_YA2:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP ldst2_opcode_y2, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_ldst3_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrca = get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YA2:
+ gen_ld(dc, rsrcbdst, rsrca, MO_LEUW, "ld2u");
+ return;
+ case MODE_OPCODE_YB2:
+ gen_ld(dc, rsrcbdst, rsrca, MO_LEQ, "ld(na)");
+ return;
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrca, rsrcbdst, MO_LEQ, "st");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_bf_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ uint8_t start = get_BFStart_X0(bundle);
+ uint8_t end = get_BFEnd_X0(bundle);
+
+ switch (get_BFOpcodeExtension_X0(bundle)) {
+ case BFEXTS_BF_OPCODE_X0:
+ gen_bfexts(dc, rdst, rsrc, start, end);
+ return;
+ case BFEXTU_BF_OPCODE_X0:
+ gen_bfextu(dc, rdst, rsrc, start, end);
+ return;
+ case BFINS_BF_OPCODE_X0:
+ gen_bfins(dc, rdst, rsrc, start, end);
+ return;
+ case MM_BF_OPCODE_X0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP bf_opcode_x0, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_imm8_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ int8_t imm8 = get_Imm8_X0(bundle);
+
+ switch (get_Imm8OpcodeExtension_X0(bundle)) {
+ case ADDI_IMM8_OPCODE_X0:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_IMM8_OPCODE_X0:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_IMM8_OPCODE_X0:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_IMM8_OPCODE_X0:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
+ return;
+ case CMPLTSI_IMM8_OPCODE_X0:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
+ return;
+ case CMPLTUI_IMM8_OPCODE_X0:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "cmpltui");
+ return;
+ case ORI_IMM8_OPCODE_X0:
+ gen_ori(dc, rdst, rsrc, imm8);
+ return;
+ case V1CMPEQI_IMM8_OPCODE_X0:
+ gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "v1cmpeqi");
+ return;
+ case V1CMPLTSI_IMM8_OPCODE_X0:
+ gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "v1cmpltsi");
+ return;
+ case V1CMPLTUI_IMM8_OPCODE_X0:
+ gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "v1cmpltui");
+ return;
+ case XORI_IMM8_OPCODE_X0:
+ gen_xori(dc, rdst, rsrc, imm8);
+ return;
+ case V1ADDI_IMM8_OPCODE_X0:
+ case V1MAXUI_IMM8_OPCODE_X0:
+ case V1MINUI_IMM8_OPCODE_X0:
+ case V2ADDI_IMM8_OPCODE_X0:
+ case V2CMPEQI_IMM8_OPCODE_X0:
+ case V2CMPLTSI_IMM8_OPCODE_X0:
+ case V2CMPLTUI_IMM8_OPCODE_X0:
+ case V2MAXSI_IMM8_OPCODE_X0:
+ case V2MINSI_IMM8_OPCODE_X0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP imm8_opcode_x0, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_u_opcode_ex_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+
+ switch (get_UnaryOpcodeExtension_X0(bundle)) {
+ case CNTLZ_UNARY_OPCODE_X0:
+ gen_cntlz(dc, rdst, rsrc);
+ return;
+ case CNTTZ_UNARY_OPCODE_X0:
+ gen_cnttz(dc, rdst, rsrc);
+ return;
+ case FNOP_UNARY_OPCODE_X0:
+ case NOP_UNARY_OPCODE_X0:
+ if (!rsrc && !rdst) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ /* Fall through */
+ case FSINGLE_PACK1_UNARY_OPCODE_X0:
+ case PCNT_UNARY_OPCODE_X0:
+ case REVBITS_UNARY_OPCODE_X0:
+ case REVBYTES_UNARY_OPCODE_X0:
+ case TBLIDXB0_UNARY_OPCODE_X0:
+ case TBLIDXB1_UNARY_OPCODE_X0:
+ case TBLIDXB2_UNARY_OPCODE_X0:
+ case TBLIDXB3_UNARY_OPCODE_X0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP decode_u_opcode_ex_x0, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_rrr_0_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rsrcb = get_SrcB_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+
+ switch (get_RRROpcodeExtension_X0(bundle)) {
+ case ADD_RRR_0_OPCODE_X0:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADDX_RRR_0_OPCODE_X0:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADDXSC_RRR_0_OPCODE_X0:
+ gen_addxsc(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_0_OPCODE_X0:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMOVEQZ_RRR_0_OPCODE_X0:
+ gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmoveqz");
+ return;
+ case CMOVNEZ_RRR_0_OPCODE_X0:
+ gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmovnez");
+ return;
+ case CMPEQ_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
+ return;
+ case CMPLES_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
+ return;
+ case CMPLEU_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
+ return;
+ case CMPLTS_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
+ return;
+ case CMPLTU_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
+ return;
+ case CMPNE_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
+ return;
+ case DBLALIGN_RRR_0_OPCODE_X0:
+ gen_dblalign(dc, rdst, rsrc, rsrcb);
+ return;
+ case MNZ_RRR_0_OPCODE_X0:
+ gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "mnz");
+ return;
+ case MZ_RRR_0_OPCODE_X0:
+ gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "mz");
+ return;
+ case MULAX_RRR_0_OPCODE_X0:
+ gen_mulx(dc, rdst, rsrc, rsrcb, true, "mulax");
+ return;
+ case MULX_RRR_0_OPCODE_X0:
+ gen_mulx(dc, rdst, rsrc, rsrcb, false, "mulx");
+ return;
+ case MULA_HS_HS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, true, true, true, "mula_hs_hs");
+ return;
+ case MULA_HS_HU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, true, true, false, "mula_hs_hu");
+ return;
+ case MULA_HS_LS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, true, false, true, "mula_hs_ls");
+ return;
+ case MULA_HS_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, true, false, false, "mula_hs_lu");
+ return;
+ case MULA_HU_LS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, false, false, true, "mula_hu_ls");
+ return;
+ case MULA_HU_HU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, false, true, false, "mula_hu_hu");
+ return;
+ case MULA_HU_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, true, false, false, false, "mula_hu_lu");
+ return;
+ case MULA_LS_LS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, false, true, false, true, "mula_ls_ls");
+ return;
+ case MULA_LS_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, false, true, false, false, "mula_ls_lu");
+ return;
+ case MULA_LU_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ true, false, false, false, false, "mula_lu_lu");
+ return;
+ case MUL_HS_HS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, true, true, true, "mul_hs_hs");
+ return;
+ case MUL_HS_HU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, true, true, false, "mul_hs_hu");
+ return;
+ case MUL_HS_LS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, true, false, true, "mul_hs_ls");
+ return;
+ case MUL_HS_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, true, false, false, "mul_hs_lu");
+ return;
+ case MUL_HU_LS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, false, false, true, "mul_hu_ls");
+ return;
+ case MUL_HU_HU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, false, true, false, "mul_hu_hu");
+ return;
+ case MUL_HU_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, true, false, false, false, "mul_hu_lu");
+ return;
+ case MUL_LS_LS_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, false, true, false, true, "mul_ls_ls");
+ return;
+ case MUL_LS_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, false, true, false, false, "mul_ls_lu");
+ return;
+ case MUL_LU_LU_RRR_0_OPCODE_X0:
+ gen_mul(dc, rdst, rsrc, rsrcb,
+ false, false, false, false, false, "mul_lu_lu");
+ return;
+ case NOR_RRR_0_OPCODE_X0:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case OR_RRR_0_OPCODE_X0:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case ROTL_RRR_0_OPCODE_X0:
+ gen_rotl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL_RRR_0_OPCODE_X0:
+ gen_shl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL1ADDX_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, true);
+ return;
+ case SHL1ADD_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
+ return;
+ case SHL2ADDX_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, true);
+ return;
+ case SHL2ADD_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
+ return;
+ case SHL3ADDX_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, true);
+ return;
+ case SHL3ADD_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
+ return;
+ case SHLX_RRR_0_OPCODE_X0:
+ gen_shlx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRS_RRR_0_OPCODE_X0:
+ gen_shrs(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRUX_RRR_0_OPCODE_X0:
+ gen_shrux(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRU_RRR_0_OPCODE_X0:
+ gen_shru(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHUFFLEBYTES_RRR_0_OPCODE_X0:
+ gen_shufflebytes(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_X0:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUB_RRR_0_OPCODE_X0:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ case UNARY_RRR_0_OPCODE_X0:
+ return decode_u_opcode_ex_x0(dc, bundle);
+ case V1INT_L_RRR_0_OPCODE_X0:
+ gen_v1int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case V4INT_L_RRR_0_OPCODE_X0:
+ gen_v4int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case V1CMPEQ_RRR_0_OPCODE_X0:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "v1cmpeq");
+ return;
+ case V1CMPLES_RRR_0_OPCODE_X0:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "v1cmples");
+ return;
+ case V1CMPLEU_RRR_0_OPCODE_X0:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "v1cmpleu");
+ return;
+ case V1CMPLTS_RRR_0_OPCODE_X0:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "v1cmplts");
+ return;
+ case V1CMPLTU_RRR_0_OPCODE_X0:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "v1cmpltu");
+ return;
+ case V1CMPNE_RRR_0_OPCODE_X0:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "v1cmpne");
+ return;
+ case XOR_RRR_0_OPCODE_X0:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMULAF_RRR_0_OPCODE_X0:
+ case CMULA_RRR_0_OPCODE_X0:
+ case CMULFR_RRR_0_OPCODE_X0:
+ case CMULF_RRR_0_OPCODE_X0:
+ case CMULHR_RRR_0_OPCODE_X0:
+ case CMULH_RRR_0_OPCODE_X0:
+ case CMUL_RRR_0_OPCODE_X0:
+ case CRC32_32_RRR_0_OPCODE_X0:
+ case CRC32_8_RRR_0_OPCODE_X0:
+ case DBLALIGN2_RRR_0_OPCODE_X0:
+ case DBLALIGN4_RRR_0_OPCODE_X0:
+ case DBLALIGN6_RRR_0_OPCODE_X0:
+ case FDOUBLE_ADDSUB_RRR_0_OPCODE_X0:
+ case FDOUBLE_ADD_FLAGS_RRR_0_OPCODE_X0:
+ case FDOUBLE_MUL_FLAGS_RRR_0_OPCODE_X0:
+ case FDOUBLE_PACK1_RRR_0_OPCODE_X0:
+ case FDOUBLE_PACK2_RRR_0_OPCODE_X0:
+ case FDOUBLE_SUB_FLAGS_RRR_0_OPCODE_X0:
+ case FDOUBLE_UNPACK_MAX_RRR_0_OPCODE_X0:
+ case FDOUBLE_UNPACK_MIN_RRR_0_OPCODE_X0:
+ case FSINGLE_ADD1_RRR_0_OPCODE_X0:
+ case FSINGLE_ADDSUB2_RRR_0_OPCODE_X0:
+ case FSINGLE_MUL1_RRR_0_OPCODE_X0:
+ case FSINGLE_MUL2_RRR_0_OPCODE_X0:
+ case FSINGLE_PACK2_RRR_0_OPCODE_X0:
+ case FSINGLE_SUB1_RRR_0_OPCODE_X0:
+ case SUBXSC_RRR_0_OPCODE_X0:
+ case V1ADDUC_RRR_0_OPCODE_X0:
+ case V1ADD_RRR_0_OPCODE_X0:
+ case V1ADIFFU_RRR_0_OPCODE_X0:
+ case V1AVGU_RRR_0_OPCODE_X0:
+ case V1DDOTPUSA_RRR_0_OPCODE_X0:
+ case V1DDOTPUS_RRR_0_OPCODE_X0:
+ case V1DOTPA_RRR_0_OPCODE_X0:
+ case V1DOTPUSA_RRR_0_OPCODE_X0:
+ case V1DOTPUS_RRR_0_OPCODE_X0:
+ case V1DOTP_RRR_0_OPCODE_X0:
+ case V1MAXU_RRR_0_OPCODE_X0:
+ case V1MINU_RRR_0_OPCODE_X0:
+ case V1MNZ_RRR_0_OPCODE_X0:
+ case V1MULTU_RRR_0_OPCODE_X0:
+ case V1MULUS_RRR_0_OPCODE_X0:
+ case V1MULU_RRR_0_OPCODE_X0:
+ case V1MZ_RRR_0_OPCODE_X0:
+ case V1SADAU_RRR_0_OPCODE_X0:
+ case V1SADU_RRR_0_OPCODE_X0:
+ case V1SHL_RRR_0_OPCODE_X0:
+ case V1SHRS_RRR_0_OPCODE_X0:
+ case V1SHRU_RRR_0_OPCODE_X0:
+ case V1SUBUC_RRR_0_OPCODE_X0:
+ case V1SUB_RRR_0_OPCODE_X0:
+ case V1INT_H_RRR_0_OPCODE_X0:
+ case V2INT_H_RRR_0_OPCODE_X0:
+ case V2INT_L_RRR_0_OPCODE_X0:
+ case V4INT_H_RRR_0_OPCODE_X0:
+ case V2ADDSC_RRR_0_OPCODE_X0:
+ case V2ADD_RRR_0_OPCODE_X0:
+ case V2ADIFFS_RRR_0_OPCODE_X0:
+ case V2AVGS_RRR_0_OPCODE_X0:
+ case V2CMPEQ_RRR_0_OPCODE_X0:
+ case V2CMPLES_RRR_0_OPCODE_X0:
+ case V2CMPLEU_RRR_0_OPCODE_X0:
+ case V2CMPLTS_RRR_0_OPCODE_X0:
+ case V2CMPLTU_RRR_0_OPCODE_X0:
+ case V2CMPNE_RRR_0_OPCODE_X0:
+ case V2DOTPA_RRR_0_OPCODE_X0:
+ case V2DOTP_RRR_0_OPCODE_X0:
+ case V2MAXS_RRR_0_OPCODE_X0:
+ case V2MINS_RRR_0_OPCODE_X0:
+ case V2MNZ_RRR_0_OPCODE_X0:
+ case V2MULFSC_RRR_0_OPCODE_X0:
+ case V2MULS_RRR_0_OPCODE_X0:
+ case V2MULTS_RRR_0_OPCODE_X0:
+ case V2MZ_RRR_0_OPCODE_X0:
+ case V2PACKH_RRR_0_OPCODE_X0:
+ case V2PACKL_RRR_0_OPCODE_X0:
+ case V2PACKUC_RRR_0_OPCODE_X0:
+ case V2SADAS_RRR_0_OPCODE_X0:
+ case V2SADAU_RRR_0_OPCODE_X0:
+ case V2SADS_RRR_0_OPCODE_X0:
+ case V2SADU_RRR_0_OPCODE_X0:
+ case V2SHLSC_RRR_0_OPCODE_X0:
+ case V2SHL_RRR_0_OPCODE_X0:
+ case V2SHRS_RRR_0_OPCODE_X0:
+ case V2SHRU_RRR_0_OPCODE_X0:
+ case V2SUBSC_RRR_0_OPCODE_X0:
+ case V2SUB_RRR_0_OPCODE_X0:
+ case V4ADDSC_RRR_0_OPCODE_X0:
+ case V4ADD_RRR_0_OPCODE_X0:
+ case V4PACKSC_RRR_0_OPCODE_X0:
+ case V4SHLSC_RRR_0_OPCODE_X0:
+ case V4SHL_RRR_0_OPCODE_X0:
+ case V4SHRS_RRR_0_OPCODE_X0:
+ case V4SHRU_RRR_0_OPCODE_X0:
+ case V4SUBSC_RRR_0_OPCODE_X0:
+ case V4SUB_RRR_0_OPCODE_X0:
+ case V1DDOTPUA_RRR_0_OPCODE_X0:
+ case V1DDOTPU_RRR_0_OPCODE_X0:
+ case V1DOTPUA_RRR_0_OPCODE_X0:
+ case V1DOTPU_RRR_0_OPCODE_X0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP rrr_0_opcode_x0, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_shift_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ uint8_t shamt = get_ShAmt_X0(bundle);
+
+ switch (get_ShiftOpcodeExtension_X0(bundle)) {
+ case ROTLI_SHIFT_OPCODE_X0:
+ gen_rotli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLI_SHIFT_OPCODE_X0:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLXI_SHIFT_OPCODE_X0:
+ gen_shlxi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_X0:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_X0:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUXI_SHIFT_OPCODE_X0:
+ gen_shruxi(dc, rdst, rsrc, shamt);
+ return;
+ case V1SHRUI_SHIFT_OPCODE_X0:
+ gen_v1shrui(dc, rdst, rsrc, shamt);
+ return;
+ case V1SHLI_SHIFT_OPCODE_X0:
+ case V1SHRSI_SHIFT_OPCODE_X0:
+ case V2SHLI_SHIFT_OPCODE_X0:
+ case V2SHRSI_SHIFT_OPCODE_X0:
+ case V2SHRUI_SHIFT_OPCODE_X0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP shift_opcode_x0, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_branch_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t src = get_SrcA_X1(bundle);
+ int32_t off = sign_extend(get_BrOff_X1(bundle), 17);
+
+ switch (get_BrType_X1(bundle)) {
+ case BEQZT_BRANCH_OPCODE_X1:
+ case BEQZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_EQ, "beqz(t)");
+ return;
+ case BNEZT_BRANCH_OPCODE_X1:
+ case BNEZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_NE, "bnez(t)");
+ return;
+ case BLBCT_BRANCH_OPCODE_X1:
+ case BLBC_BRANCH_OPCODE_X1:
+ gen_blb(dc, src, off, TCG_COND_EQ, "blbc(t)");
+ return;
+ case BLBST_BRANCH_OPCODE_X1:
+ case BLBS_BRANCH_OPCODE_X1:
+ gen_blb(dc, src, off, TCG_COND_NE, "blbs(t)");
+ return;
+ case BLEZT_BRANCH_OPCODE_X1:
+ case BLEZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_LE, "blez(t)");
+ return;
+ case BLTZT_BRANCH_OPCODE_X1:
+ case BLTZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_LT, "bltz(t)");
+ return;
+ case BGTZT_BRANCH_OPCODE_X1:
+ case BGTZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_GT, "bgtz(t)");
+ return;
+ case BGEZT_BRANCH_OPCODE_X1:
+ case BGEZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_GE, "bgez(t)");
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_imm8_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+ int8_t imm8 = get_Imm8_X1(bundle);
+ uint8_t rsrcb = get_SrcB_X1(bundle);
+ int8_t dimm8 = get_Dest_Imm8_X1(bundle);
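+    /* The st*_add forms reuse the Dest field to encode their post-increment
+       immediate, hence the separate dimm8 above.  */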
+
+ switch (get_Imm8OpcodeExtension_X1(bundle)) {
+ case ADDI_IMM8_OPCODE_X1:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_IMM8_OPCODE_X1:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_IMM8_OPCODE_X1:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_IMM8_OPCODE_X1:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
+ return;
+ case CMPLTSI_IMM8_OPCODE_X1:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
+ return;
+ case CMPLTUI_IMM8_OPCODE_X1:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "cmpltui");
+ return;
+ case LD1S_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_SB, "ld1s_add");
+ return;
+ case LD1U_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_UB, "ld1u_add");
+ return;
+ case LD2S_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LESW, "ld2s_add");
+ return;
+ case LD2U_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LEUW, "ld2u_add");
+ return;
+ case LD4S_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LESL, "ld4s_add");
+ return;
+ case LD4U_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LEUL, "ld4u_add");
+ return;
+ case LD_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LEQ, "ld(na)_add");
+ return;
+ case MFSPR_IMM8_OPCODE_X1:
+ gen_mfspr(dc, rdst, get_MF_Imm14_X1(bundle));
+ return;
+ case MTSPR_IMM8_OPCODE_X1:
+ gen_mtspr(dc, rsrc, get_MT_Imm14_X1(bundle));
+ return;
+ case ORI_IMM8_OPCODE_X1:
+ gen_ori(dc, rdst, rsrc, imm8);
+ return;
+ case ST_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEQ, "st_add");
+ return;
+ case ST1_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_UB, "st1_add");
+ return;
+ case ST2_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEUW, "st2_add");
+ return;
+ case ST4_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEUL, "st4_add");
+ return;
+ case V1CMPEQI_IMM8_OPCODE_X1:
+ gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "v1cmpeqi");
+ return;
+ case V1CMPLTSI_IMM8_OPCODE_X1:
+ gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "v1cmpltsi");
+ return;
+ case V1CMPLTUI_IMM8_OPCODE_X1:
+ gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "v1cmpltui");
+ return;
+ case XORI_IMM8_OPCODE_X1:
+ gen_xori(dc, rdst, rsrc, imm8);
+ return;
+ case LDNT1S_ADD_IMM8_OPCODE_X1:
+ case LDNT1U_ADD_IMM8_OPCODE_X1:
+ case LDNT2S_ADD_IMM8_OPCODE_X1:
+ case LDNT2U_ADD_IMM8_OPCODE_X1:
+ case LDNT4S_ADD_IMM8_OPCODE_X1:
+ case LDNT4U_ADD_IMM8_OPCODE_X1:
+ case LDNT_ADD_IMM8_OPCODE_X1:
+ case LWNA_ADD_IMM8_OPCODE_X1:
+ case STNT1_ADD_IMM8_OPCODE_X1:
+ case STNT2_ADD_IMM8_OPCODE_X1:
+ case STNT4_ADD_IMM8_OPCODE_X1:
+ case STNT_ADD_IMM8_OPCODE_X1:
+ case V1ADDI_IMM8_OPCODE_X1:
+ case V1MAXUI_IMM8_OPCODE_X1:
+ case V1MINUI_IMM8_OPCODE_X1:
+ case V2ADDI_IMM8_OPCODE_X1:
+ case V2CMPEQI_IMM8_OPCODE_X1:
+ case V2CMPLTSI_IMM8_OPCODE_X1:
+ case V2CMPLTUI_IMM8_OPCODE_X1:
+ case V2MAXSI_IMM8_OPCODE_X1:
+ case V2MINSI_IMM8_OPCODE_X1:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP imm8_opcode_x1, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_jump_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ int off = sign_extend(get_JumpOff_X1(bundle), 27);
+
+ switch (get_JumpOpcodeExtension_X1(bundle)) {
+ case JAL_JUMP_OPCODE_X1:
+ gen_jal(dc, off);
+ return;
+ case J_JUMP_OPCODE_X1:
+ gen_j(dc, off);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_u_opcode_ex_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+
+ switch (get_UnaryOpcodeExtension_X1(bundle)) {
+ case NOP_UNARY_OPCODE_X1:
+ case FNOP_UNARY_OPCODE_X1:
+ if (!rdst && !rsrc) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ break;
+ case JALRP_UNARY_OPCODE_X1:
+ case JALR_UNARY_OPCODE_X1:
+ if (!rdst) {
+ gen_jalr(dc, rsrc);
+ return;
+ }
+ break;
+ case JRP_UNARY_OPCODE_X1:
+ case JR_UNARY_OPCODE_X1:
+ if (!rdst) {
+ gen_jr(dc, rsrc);
+ return;
+ }
+ break;
+ case LD1S_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_SB, "ld1s");
+ return;
+ case LD1U_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_UB, "ld1u");
+ return;
+ case LD2S_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LESW, "ld2s");
+ return;
+ case LD2U_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LEUW, "ld2u");
+ return;
+ case LD4S_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LESL, "ld4s");
+ return;
+ case LD4U_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LEUL, "ld4u");
+ return;
+ case LDNA_UNARY_OPCODE_X1:
+ case LD_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LEQ, "ld(na)");
+ return;
+ case LNK_UNARY_OPCODE_X1:
+ if (!rsrc) {
+ gen_lnk(dc, rdst);
+ return;
+ }
+ break;
+ case MF_UNARY_OPCODE_X1:
+ if (!rdst && !rsrc) {
+ gen_mf(dc);
+ return;
+ }
+ break;
+ case SWINT1_UNARY_OPCODE_X1:
+ if (!rsrc && !rdst) {
+ gen_swint1(dc);
+ return;
+ }
+ break;
+ case WH64_UNARY_OPCODE_X1:
+ if (!rdst) {
+ gen_wh64(dc, rsrc);
+ return;
+ }
+ break;
+ case DRAIN_UNARY_OPCODE_X1:
+ case DTLBPR_UNARY_OPCODE_X1:
+ case FINV_UNARY_OPCODE_X1:
+ case FLUSHWB_UNARY_OPCODE_X1:
+ case FLUSH_UNARY_OPCODE_X1:
+ case ICOH_UNARY_OPCODE_X1:
+ case ILL_UNARY_OPCODE_X1:
+ case INV_UNARY_OPCODE_X1:
+ case IRET_UNARY_OPCODE_X1:
+ case LDNT1S_UNARY_OPCODE_X1:
+ case LDNT1U_UNARY_OPCODE_X1:
+ case LDNT2S_UNARY_OPCODE_X1:
+ case LDNT2U_UNARY_OPCODE_X1:
+ case LDNT4S_UNARY_OPCODE_X1:
+ case LDNT4U_UNARY_OPCODE_X1:
+ case LDNT_UNARY_OPCODE_X1:
+ case NAP_UNARY_OPCODE_X1:
+ case SWINT0_UNARY_OPCODE_X1:
+ case SWINT2_UNARY_OPCODE_X1:
+ case SWINT3_UNARY_OPCODE_X1:
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP decode_u_opcode_ex_x1, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+}
+
+static void decode_rrr_0_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rsrcb = get_SrcB_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+
+ switch (get_RRROpcodeExtension_X1(bundle)) {
+ case ADDX_RRR_0_OPCODE_X1:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADDXSC_RRR_0_OPCODE_X1:
+ gen_addxsc(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADD_RRR_0_OPCODE_X1:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_0_OPCODE_X1:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMPEQ_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
+ return;
+ case CMPLES_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
+ return;
+ case CMPLEU_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
+ return;
+ case CMPEXCH4_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_CMPEXCH4, "cmpexch4");
+ return;
+ case CMPEXCH_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_CMPEXCH, "cmpexch");
+ return;
+ case CMPLTS_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
+ return;
+ case CMPLTU_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
+ return;
+ case CMPNE_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
+ return;
+ case EXCH4_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_EXCH4, "exch4");
+ return;
+ case EXCH_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_EXCH, "exch");
+ return;
+ case FETCHADD_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHADD, "fetchadd");
+ return;
+ case FETCHADD4_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHADD4, "fetchadd4");
+ return;
+ case FETCHADDGEZ_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHADDGEZ, "fetchaddgez");
+ return;
+ case FETCHADDGEZ4_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHADDGEZ4, "fetchaddgez4");
+ return;
+ case FETCHAND_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHAND, "fetchand");
+ return;
+ case FETCHAND4_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHAND4, "fetchand4");
+ return;
+ case FETCHOR_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHOR, "fetchor");
+ return;
+ case FETCHOR4_RRR_0_OPCODE_X1:
+ gen_atomic_excp(dc, rdst, rsrc, rsrcb,
+ TILEGX_EXCP_OPCODE_FETCHOR4, "fetchor4");
+ return;
+ case MZ_RRR_0_OPCODE_X1:
+ gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "mz");
+ return;
+ case MNZ_RRR_0_OPCODE_X1:
+ gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "mnz");
+ return;
+ case NOR_RRR_0_OPCODE_X1:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case OR_RRR_0_OPCODE_X1:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case ROTL_RRR_0_OPCODE_X1:
+ gen_rotl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL1ADDX_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, true);
+ return;
+ case SHL1ADD_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
+ return;
+ case SHL2ADDX_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, true);
+ return;
+ case SHL2ADD_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
+ return;
+ case SHL3ADDX_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, true);
+ return;
+ case SHL3ADD_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
+ return;
+ case SHLX_RRR_0_OPCODE_X1:
+ gen_shlx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL_RRR_0_OPCODE_X1:
+ gen_shl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRS_RRR_0_OPCODE_X1:
+ gen_shrs(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRUX_RRR_0_OPCODE_X1:
+ gen_shrux(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRU_RRR_0_OPCODE_X1:
+ gen_shru(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUB_RRR_0_OPCODE_X1:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_X1:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case ST1_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_UB, "st1");
+ return;
+ }
+ break;
+ case ST2_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_LEUW, "st2");
+ return;
+ }
+ break;
+ case ST4_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_LEUL, "st4");
+ return;
+ }
+ break;
+ case ST_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_LEQ, "st");
+ return;
+ }
+ break;
+ case UNARY_RRR_0_OPCODE_X1:
+ return decode_u_opcode_ex_x1(dc, bundle);
+ case V1INT_L_RRR_0_OPCODE_X1:
+ gen_v1int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case V4INT_L_RRR_0_OPCODE_X1:
+ gen_v4int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case V1CMPEQ_RRR_0_OPCODE_X1:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "v1cmpeq");
+ return;
+ case V1CMPLES_RRR_0_OPCODE_X1:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "v1cmples");
+ return;
+ case V1CMPLEU_RRR_0_OPCODE_X1:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "v1cmpleu");
+ return;
+ case V1CMPLTS_RRR_0_OPCODE_X1:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "v1cmplts");
+ return;
+ case V1CMPLTU_RRR_0_OPCODE_X1:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "v1cmpltu");
+ return;
+ case V1CMPNE_RRR_0_OPCODE_X1:
+ gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "v1cmpne");
+ return;
+ case XOR_RRR_0_OPCODE_X1:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ case DBLALIGN2_RRR_0_OPCODE_X1:
+ case DBLALIGN4_RRR_0_OPCODE_X1:
+ case DBLALIGN6_RRR_0_OPCODE_X1:
+ case STNT1_RRR_0_OPCODE_X1:
+ case STNT2_RRR_0_OPCODE_X1:
+ case STNT4_RRR_0_OPCODE_X1:
+ case STNT_RRR_0_OPCODE_X1:
+ case SUBXSC_RRR_0_OPCODE_X1:
+ case V1INT_H_RRR_0_OPCODE_X1:
+ case V2INT_H_RRR_0_OPCODE_X1:
+ case V2INT_L_RRR_0_OPCODE_X1:
+ case V4INT_H_RRR_0_OPCODE_X1:
+ case V1ADDUC_RRR_0_OPCODE_X1:
+ case V1ADD_RRR_0_OPCODE_X1:
+ case V1MAXU_RRR_0_OPCODE_X1:
+ case V1MINU_RRR_0_OPCODE_X1:
+ case V1MNZ_RRR_0_OPCODE_X1:
+ case V1MZ_RRR_0_OPCODE_X1:
+ case V1SHL_RRR_0_OPCODE_X1:
+ case V1SHRS_RRR_0_OPCODE_X1:
+ case V1SHRU_RRR_0_OPCODE_X1:
+ case V1SUBUC_RRR_0_OPCODE_X1:
+ case V1SUB_RRR_0_OPCODE_X1:
+ case V2ADDSC_RRR_0_OPCODE_X1:
+ case V2ADD_RRR_0_OPCODE_X1:
+ case V2CMPEQ_RRR_0_OPCODE_X1:
+ case V2CMPLES_RRR_0_OPCODE_X1:
+ case V2CMPLEU_RRR_0_OPCODE_X1:
+ case V2CMPLTS_RRR_0_OPCODE_X1:
+ case V2CMPLTU_RRR_0_OPCODE_X1:
+ case V2CMPNE_RRR_0_OPCODE_X1:
+ case V2MAXS_RRR_0_OPCODE_X1:
+ case V2MINS_RRR_0_OPCODE_X1:
+ case V2MNZ_RRR_0_OPCODE_X1:
+ case V2MZ_RRR_0_OPCODE_X1:
+ case V2PACKH_RRR_0_OPCODE_X1:
+ case V2PACKL_RRR_0_OPCODE_X1:
+ case V2PACKUC_RRR_0_OPCODE_X1:
+ case V2SHLSC_RRR_0_OPCODE_X1:
+ case V2SHL_RRR_0_OPCODE_X1:
+ case V2SHRS_RRR_0_OPCODE_X1:
+ case V2SHRU_RRR_0_OPCODE_X1:
+ case V2SUBSC_RRR_0_OPCODE_X1:
+ case V2SUB_RRR_0_OPCODE_X1:
+ case V4ADDSC_RRR_0_OPCODE_X1:
+ case V4ADD_RRR_0_OPCODE_X1:
+ case V4PACKSC_RRR_0_OPCODE_X1:
+ case V4SHLSC_RRR_0_OPCODE_X1:
+ case V4SHL_RRR_0_OPCODE_X1:
+ case V4SHRS_RRR_0_OPCODE_X1:
+ case V4SHRU_RRR_0_OPCODE_X1:
+ case V4SUBSC_RRR_0_OPCODE_X1:
+ case V4SUB_RRR_0_OPCODE_X1:
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_0_opcode_x1, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+}
+
+static void decode_shift_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+ uint8_t shamt = get_ShAmt_X1(bundle);
+
+ switch (get_ShiftOpcodeExtension_X1(bundle)) {
+ case ROTLI_SHIFT_OPCODE_X1:
+ gen_rotli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLI_SHIFT_OPCODE_X1:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLXI_SHIFT_OPCODE_X1:
+ gen_shlxi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_X1:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_X1:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUXI_SHIFT_OPCODE_X1:
+ gen_shruxi(dc, rdst, rsrc, shamt);
+ return;
+ case V1SHRUI_SHIFT_OPCODE_X1:
+ gen_v1shrui(dc, rdst, rsrc, shamt);
+ return;
+ case V1SHLI_SHIFT_OPCODE_X1:
+ case V1SHRSI_SHIFT_OPCODE_X1:
+ case V2SHLI_SHIFT_OPCODE_X1:
+ case V2SHRSI_SHIFT_OPCODE_X1:
+ case V2SHRUI_SHIFT_OPCODE_X1:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP shift_opcode_x1, [" FMT64X "]\n", bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_y0(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_Y0(bundle);
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+ int8_t imm8 = get_Imm8_Y0(bundle);
+
+ switch (opcode) {
+ case ADDI_OPCODE_Y0:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_OPCODE_Y0:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_OPCODE_Y0:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_OPCODE_Y0:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
+ return;
+ case CMPLTSI_OPCODE_Y0:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
+ return;
+ case RRR_0_OPCODE_Y0:
+ decode_rrr_0_opcode_y0(dc, bundle);
+ return;
+ case RRR_1_OPCODE_Y0:
+ decode_rrr_1_opcode_y0(dc, bundle);
+ return;
+ case RRR_2_OPCODE_Y0:
+ decode_rrr_2_opcode_y0(dc, bundle);
+ return;
+ case RRR_3_OPCODE_Y0:
+ decode_rrr_3_opcode_y0(dc, bundle);
+ return;
+ case RRR_4_OPCODE_Y0:
+ decode_rrr_4_opcode_y0(dc, bundle);
+ return;
+ case RRR_5_OPCODE_Y0:
+ decode_rrr_5_opcode_y0(dc, bundle);
+ return;
+ case RRR_6_OPCODE_Y0:
+ decode_rrr_6_opcode_y0(dc, bundle);
+ return;
+ case RRR_9_OPCODE_Y0:
+ decode_rrr_9_opcode_y0(dc, bundle);
+ return;
+ case SHIFT_OPCODE_Y0:
+ decode_shift_opcode_y0(dc, bundle);
+ return;
+ case RRR_7_OPCODE_Y0:
+ case RRR_8_OPCODE_Y0:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP y0, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_y1(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_Y1(bundle);
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+ int8_t imm8 = get_Imm8_Y1(bundle);
+
+ switch (opcode) {
+ case ADDI_OPCODE_Y1:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_OPCODE_Y1:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_OPCODE_Y1:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_OPCODE_Y1:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
+ return;
+ case CMPLTSI_OPCODE_Y1:
+ gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
+ return;
+ case RRR_0_OPCODE_Y1:
+ decode_rrr_0_opcode_y1(dc, bundle);
+ return;
+ case RRR_1_OPCODE_Y1:
+ decode_rrr_1_opcode_y1(dc, bundle);
+ return;
+ case RRR_2_OPCODE_Y1:
+ decode_rrr_2_opcode_y1(dc, bundle);
+ return;
+ case RRR_3_OPCODE_Y1:
+ decode_rrr_3_opcode_y1(dc, bundle);
+ return;
+ case RRR_5_OPCODE_Y1:
+ decode_rrr_5_opcode_y1(dc, bundle);
+ return;
+ case SHIFT_OPCODE_Y1:
+ decode_shift_opcode_y1(dc, bundle);
+ return;
+ case RRR_4_OPCODE_Y1:
+ case RRR_6_OPCODE_Y1:
+ case RRR_7_OPCODE_Y1:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP y1, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_y2(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_Y2(bundle);
+
+ switch (opcode) {
+ case 0: /* LD1S_OPCODE_Y2, ST1_OPCODE_Y2 */
+ decode_ldst0_opcode_y2(dc, bundle);
+ return;
+ case 1: /* LD4S_OPCODE_Y2, LD1U_OPCODE_Y2, ST2_OPCODE_Y2 */
+ decode_ldst1_opcode_y2(dc, bundle);
+ return;
+ case 2: /* LD2S_OPCODE_Y2, LD4U_OPCODE_Y2, ST4_OPCODE_Y2 */
+ decode_ldst2_opcode_y2(dc, bundle);
+ return;
+ case 3: /* LD_OPCODE_Y2, ST_OPCODE_Y2, LD2U_OPCODE_Y2 */
+ decode_ldst3_opcode_y2(dc, bundle);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_x0(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_X0(bundle);
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ int16_t imm16 = get_Imm16_X0(bundle);
+
+ switch (opcode) {
+ case ADDLI_OPCODE_X0:
+ gen_addimm(dc, rdst, rsrc, imm16);
+ return;
+ case ADDXLI_OPCODE_X0:
+ gen_addximm(dc, rdst, rsrc, imm16);
+ return;
+ case BF_OPCODE_X0:
+ decode_bf_opcode_x0(dc, bundle);
+ return;
+ case IMM8_OPCODE_X0:
+ decode_imm8_opcode_x0(dc, bundle);
+ return;
+ case RRR_0_OPCODE_X0:
+ decode_rrr_0_opcode_x0(dc, bundle);
+ return;
+ case SHIFT_OPCODE_X0:
+ decode_shift_opcode_x0(dc, bundle);
+ return;
+ case SHL16INSLI_OPCODE_X0:
+ gen_shl16insli(dc, rdst, rsrc, (uint16_t)imm16);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void decode_x1(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_X1(bundle);
+    uint8_t rsrc = get_SrcA_X1(bundle);
+    uint8_t rdst = get_Dest_X1(bundle);
+    int16_t imm16 = get_Imm16_X1(bundle);
+
+ switch (opcode) {
+ case ADDLI_OPCODE_X1:
+ gen_addimm(dc, rdst, rsrc, imm16);
+ return;
+ case ADDXLI_OPCODE_X1:
+ gen_addximm(dc, rdst, rsrc, imm16);
+ return;
+ case BRANCH_OPCODE_X1:
+ decode_branch_opcode_x1(dc, bundle);
+ return;
+ case IMM8_OPCODE_X1:
+ decode_imm8_opcode_x1(dc, bundle);
+ return;
+ case JUMP_OPCODE_X1:
+ decode_jump_opcode_x1(dc, bundle);
+ return;
+ case RRR_0_OPCODE_X1:
+ decode_rrr_0_opcode_x1(dc, bundle);
+ return;
+ case SHIFT_OPCODE_X1:
+ decode_shift_opcode_x1(dc, bundle);
+ return;
+ case SHL16INSLI_OPCODE_X1:
+ gen_shl16insli(dc, rdst, rsrc, (uint16_t)imm16);
+ return;
+ default:
+ g_assert_not_reached();
+ }
+}
+
+static void translate_one_bundle(struct DisasContext *dc, uint64_t bundle)
+{
+ int i;
+ TCGv tmp;
+
+ for (i = 0; i < TILEGX_TMP_REGS; i++) {
+ dc->tmp_regs[i].idx = TILEGX_R_NOREG;
+ TCGV_UNUSED_I64(dc->tmp_regs[i].val);
+ }
+ dc->tmp_regcur = dc->tmp_regs;
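+    /* All instructions in a bundle nominally execute in parallel, so
+       register writes are staged in tmp_regs here and only committed to
+       the architectural registers once the whole bundle has been decoded
+       (see the writeback loop below).  */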
+
+ if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT))) {
+ tcg_gen_debug_insn_start(dc->pc);
+ }
+
+ if (get_Mode(bundle)) {
+ decode_y0(dc, bundle);
+ decode_y1(dc, bundle);
+ decode_y2(dc, bundle);
+ } else {
+ decode_x0(dc, bundle);
+ decode_x1(dc, bundle);
+ }
+
+ for (i = 0; i < TILEGX_TMP_REGS; i++) {
+ if (dc->tmp_regs[i].idx == TILEGX_R_NOREG) {
+ continue;
+ }
+ if (dc->tmp_regs[i].idx < TILEGX_R_COUNT) {
+ tcg_gen_mov_i64(cpu_regs[dc->tmp_regs[i].idx], dc->tmp_regs[i].val);
+ }
+ tcg_temp_free_i64(dc->tmp_regs[i].val);
+ }
+
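+    /* Resolve any branch this bundle recorded: an unconditional jump moves
+       the pc directly, while a conditional branch selects between the
+       target and the fall-through address with movcond.  */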
+ if (dc->jmp.cond != TCG_COND_NEVER) {
+ if (dc->jmp.cond == TCG_COND_ALWAYS) {
+ tcg_gen_mov_i64(cpu_pc, dc->jmp.dest);
+ } else {
+ tmp = tcg_const_i64(dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+ tcg_gen_movcond_i64(dc->jmp.cond, cpu_pc,
+ dc->jmp.val1, dc->jmp.val2,
+ dc->jmp.dest, tmp);
+ tcg_temp_free_i64(dc->jmp.val1);
+ tcg_temp_free_i64(dc->jmp.val2);
+ tcg_temp_free_i64(tmp);
+ }
+ tcg_temp_free_i64(dc->jmp.dest);
+ tcg_gen_exit_tb(0);
+ }
+}
+
+static inline void gen_intermediate_code_internal(TileGXCPU *cpu,
+ TranslationBlock *tb,
+ bool search_pc)
+{
+ DisasContext ctx;
+ DisasContext *dc = &ctx;
+
+ CPUTLGState *env = &cpu->env;
+ uint64_t pc_start = tb->pc;
+ uint64_t next_page_start = (pc_start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
+ int j, lj = -1;
+ int num_insns = 0;
+ int max_insns = tb->cflags & CF_COUNT_MASK;
+
+ dc->pc = pc_start;
+ dc->exception = TILEGX_EXCP_NONE;
+ dc->jmp.cond = TCG_COND_NEVER;
+ TCGV_UNUSED_I64(dc->jmp.dest);
+ TCGV_UNUSED_I64(dc->jmp.val1);
+ TCGV_UNUSED_I64(dc->jmp.val2);
+
+ if (!max_insns) {
+ max_insns = CF_COUNT_MASK;
+ }
+ gen_tb_start(tb);
+
+ do {
+ TCGV_UNUSED_I64(dc->zero);
+ if (search_pc) {
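+            /* Record per-bundle pc/icount bookkeeping so that
+               restore_state_to_opc() can map back to a guest pc.  */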
+ j = tcg_op_buf_count();
+ if (lj < j) {
+ lj++;
+ while (lj < j) {
+ tcg_ctx.gen_opc_instr_start[lj++] = 0;
+ }
+ }
+ tcg_ctx.gen_opc_pc[lj] = dc->pc;
+ tcg_ctx.gen_opc_instr_start[lj] = 1;
+ tcg_ctx.gen_opc_icount[lj] = num_insns;
+ }
+ translate_one_bundle(dc, cpu_ldq_data(env, dc->pc));
+ num_insns++;
+ dc->pc += TILEGX_BUNDLE_SIZE_IN_BYTES;
+ if (dc->exception != TILEGX_EXCP_NONE) {
+ gen_exception(dc, dc->exception);
+ break;
+ }
+ } while (dc->jmp.cond == TCG_COND_NEVER && dc->pc < next_page_start
+ && num_insns < max_insns && !tcg_op_buf_full());
+
+ if (dc->jmp.cond == TCG_COND_NEVER) {
+ tcg_gen_movi_i64(cpu_pc, dc->pc);
+ tcg_gen_exit_tb(0);
+ }
+
+ gen_tb_end(tb, num_insns);
+ if (search_pc) {
+ j = tcg_op_buf_count();
+ lj++;
+ while (lj <= j) {
+ tcg_ctx.gen_opc_instr_start[lj++] = 0;
+ }
+ } else {
+ tb->size = dc->pc - pc_start;
+ tb->icount = num_insns;
+ }
+}
+
+void gen_intermediate_code(CPUTLGState *env, struct TranslationBlock *tb)
+{
+ gen_intermediate_code_internal(tilegx_env_get_cpu(env), tb, false);
+}
+
+void gen_intermediate_code_pc(CPUTLGState *env, struct TranslationBlock *tb)
+{
+ gen_intermediate_code_internal(tilegx_env_get_cpu(env), tb, true);
+}
+
+void restore_state_to_opc(CPUTLGState *env, TranslationBlock *tb, int pc_pos)
+{
+ env->pc = tcg_ctx.gen_opc_pc[pc_pos];
+}
+
+void tilegx_tcg_init(void)
+{
+ int i;
+
+ cpu_env = tcg_global_reg_new_ptr(TCG_AREG0, "env");
+ cpu_pc = tcg_global_mem_new_i64(TCG_AREG0, offsetof(CPUTLGState, pc), "pc");
+ for (i = 0; i < TILEGX_R_COUNT; i++) {
+ cpu_regs[i] = tcg_global_mem_new_i64(TCG_AREG0,
+ offsetof(CPUTLGState, regs[i]),
+ reg_names[i]);
+ }
+ for (i = 0; i < TILEGX_SPR_COUNT; i++) {
+ cpu_spregs[i] = tcg_global_mem_new_i64(TCG_AREG0,
+ offsetof(CPUTLGState, spregs[i]),
+ spreg_names[i]);
+ }
+#if defined(CONFIG_USER_ONLY)
+ cpu_excparam = tcg_global_mem_new_i32(TCG_AREG0,
+ offsetof(CPUTLGState, excparam),
+ "cpu_excparam");
+#endif
+}
--
1.9.3
* [Qemu-devel] [PATCH 10/10 v12] target-tilegx: Add TILE-Gx building files
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (8 preceding siblings ...)
2015-06-13 13:21 ` [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world" Chen Gang
@ 2015-06-13 13:22 ` Chen Gang
2015-06-18 22:02 ` [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Peter Maydell
10 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-06-13 13:22 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
Add the related configuration and make files for tilegx. Now, qemu
tilegx builds successfully and can run the "Hello world" static/shared
elf64 binaries.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
configure | 2 ++
default-configs/tilegx-linux-user.mak | 1 +
target-tilegx/Makefile.objs | 1 +
3 files changed, 4 insertions(+)
create mode 100644 default-configs/tilegx-linux-user.mak
create mode 100644 target-tilegx/Makefile.objs
diff --git a/configure b/configure
index 409edf9..befd461 100755
--- a/configure
+++ b/configure
@@ -5296,6 +5296,8 @@ case "$target_name" in
s390x)
gdb_xml_files="s390x-core64.xml s390-acr.xml s390-fpr.xml s390-vx.xml"
;;
+ tilegx)
+ ;;
tricore)
;;
unicore32)
diff --git a/default-configs/tilegx-linux-user.mak b/default-configs/tilegx-linux-user.mak
new file mode 100644
index 0000000..3e47493
--- /dev/null
+++ b/default-configs/tilegx-linux-user.mak
@@ -0,0 +1 @@
+# Default configuration for tilegx-linux-user
diff --git a/target-tilegx/Makefile.objs b/target-tilegx/Makefile.objs
new file mode 100644
index 0000000..8b3dc76
--- /dev/null
+++ b/target-tilegx/Makefile.objs
@@ -0,0 +1 @@
+obj-y += cpu.o translate.o helper.o
--
1.9.3
* Re: [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (9 preceding siblings ...)
2015-06-13 13:22 ` [Qemu-devel] [PATCH 10/10 v12] target-tilegx: Add TILE-Gx building files Chen Gang
@ 2015-06-18 22:02 ` Peter Maydell
2015-06-19 1:12 ` Chen Gang
10 siblings, 1 reply; 21+ messages in thread
From: Peter Maydell @ 2015-06-18 22:02 UTC (permalink / raw)
To: Chen Gang
Cc: Riku Voipio, qemu-devel, Chris Metcalf, walt@tilera.com,
Andreas Färber, rth@twiddle.net
On 13 June 2015 at 14:07, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
> It can finish running "Hello world" elf64 binary, and the related test
> cases:
>
> - with "--enable-debug", enable assertion with "-g":
>
> ./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
> ./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/a.log
>
> ./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
> ./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/b.log
>
> - without "--enable-debug", disable assertion with "-O2 -g":
>
> ./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
> ./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/c.log
>
> ./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
> ./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/d.log
>
> Chen Gang (10):
> linux-user: tilegx: Firstly add architecture related features
> linux-user: Support tilegx architecture in linux-user
> linux-user/syscall.c: conditionalize syscalls which are not defined in
> tilegx
> target-tilegx: Add opcode basic implementation from Tilera Corporation
> target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage
> target-tilegx: Add special register information from Tilera
> Corporation
> target-tilegx: Add cpu basic features for linux-user
> target-tilegx: Add several helpers for instructions translation
> target-tilegx: Generate tcg instructions to finish "Hello world"
> target-tilegx: Add TILE-Gx building files
I gave some of these my reviewed-by: tag in v11. Please don't
just drop that, it wastes my time when I end up re-looking
at patches I've already reviewed.
Anyway, you can add my Reviewed-by: tag to patches 1-7 and 10.
I'll let rth do patches 8 and 9.
Opinions on whether we should put this series into master now
(assuming 8 and 9 are good), or delay until after 2.4 release?
thanks
-- PMM
* Re: [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user
2015-06-18 22:02 ` [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Peter Maydell
@ 2015-06-19 1:12 ` Chen Gang
2015-07-01 1:06 ` Chen Gang
0 siblings, 1 reply; 21+ messages in thread
From: Chen Gang @ 2015-06-19 1:12 UTC (permalink / raw)
To: Peter Maydell
Cc: Riku Voipio, qemu-devel, Chris Metcalf, walt@tilera.com,
Andreas Färber, rth@twiddle.net
On 06/19/2015 06:02 AM, Peter Maydell wrote:
> On 13 June 2015 at 14:07, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>> It can finish running "Hello world" elf64 binary, and the related test
>> cases:
>>
>> - with "--enable-debug", enable assertion with "-g":
>>
>> ./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
>> ./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/a.log
>>
>> ./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
>> ./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/b.log
>>
>> - without "--enable-debug", disable assertion with "-O2 -g":
>>
>> ./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
>> ./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/c.log
>>
>> ./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
>> ./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/d.log
>>
>> Chen Gang (10):
>> linux-user: tilegx: Firstly add architecture related features
>> linux-user: Support tilegx architecture in linux-user
>> linux-user/syscall.c: conditionalize syscalls which are not defined in
>> tilegx
>> target-tilegx: Add opcode basic implementation from Tilera Corporation
>> target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage
>> target-tilegx: Add special register information from Tilera
>> Corporation
>> target-tilegx: Add cpu basic features for linux-user
>> target-tilegx: Add several helpers for instructions translation
>> target-tilegx: Generate tcg instructions to finish "Hello world"
>> target-tilegx: Add TILE-Gx building files
>
> I gave some of these my reviewed-by: tag in v11. Please don't
> just drop that, it wastes my time when I end up re-looking
> at patches I've already reviewed.
>
OK, thanks. I shall remember next time (i.e., keep the reviewer's
Reviewed-by tag on already-reviewed patches when re-sending a series).
> Anyway, you can add my Reviewed-by: tag to patches 1-7 and 10.
> I'll let rth do patches 8 and 9.
>
OK, thanks.
And excuse me, I am not quite familiar with the related workflow:
- Shall I apply these patches (with the Reviewed-by tags), or will
someone else do it?
- If I am to be the maintainer of tilegx, what shall I do next (e.g.
add an entry to MAINTAINERS, follow the maintainer workflow)?
For me, the next steps (after the current patches are applied) are:
- Run general testing (and send bug-fix patches as needed):
Get busybox working (e.g. sh, ls, cp, mv, vi).
Finish the DejaGNU gcc testsuite for tilegx (which is my original goal).
- Implement all remaining tilegx instructions (then send the new patches).
- Try qemu system mode for tilegx (I hope to finish it within this year).
> Opinions on whether we should put this series into master now
> (assuming 8 and 9 are good), or delay until after 2.4 release?
>
OK, thanks. And any other members' ideas, suggestions, and improvements
are welcome.
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
* Re: [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user
2015-06-19 1:12 ` Chen Gang
@ 2015-07-01 1:06 ` Chen Gang
0 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-07-01 1:06 UTC (permalink / raw)
To: Peter Maydell
Cc: Riku Voipio, qemu-devel, Chris Metcalf, walt@tilera.com,
Andreas Färber, rth@twiddle.net
Today, I shall continue testing qemu linux-user, and try to finish the
gcc testsuite within this month.
Any ideas, suggestions, and improvements are welcome.
Thanks.
On 06/19/2015 09:12 AM, Chen Gang wrote:
> On 06/19/2015 06:02 AM, Peter Maydell wrote:
>> On 13 June 2015 at 14:07, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>> It can finish running "Hello world" elf64 binary, and the related test
>>> cases:
>>>
>>> - with "--enable-debug", enable assertion with "-g":
>>>
>>> ./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
>>> ./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/a.log
>>>
>>> ./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
>>> ./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/b.log
>>>
>>> - without "--enable-debug", disable assertion with "-O2 -g":
>>>
>>> ./tilegx-linux-user/qemu-tilegx -L /upstream/release-tile /upstream/release-tile/test/test_shared
>>> ./tilegx-linux-user/qemu-tilegx -d all -L /upstream/release-tile /upstream/release-tile/test/test_shared > /tmp/c.log
>>>
>>> ./tilegx-linux-user/qemu-tilegx /upstream/release-tile/test/test_static
>>> ./tilegx-linux-user/qemu-tilegx -d all /upstream/release-tile/test/test_static > /tmp/d.log
>>>
>>> Chen Gang (10):
>>> linux-user: tilegx: Firstly add architecture related features
>>> linux-user: Support tilegx architecture in linux-user
>>> linux-user/syscall.c: conditionalize syscalls which are not defined in
>>> tilegx
>>> target-tilegx: Add opcode basic implementation from Tilera Corporation
>>> target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage
>>> target-tilegx: Add special register information from Tilera
>>> Corporation
>>> target-tilegx: Add cpu basic features for linux-user
>>> target-tilegx: Add several helpers for instructions translation
>>> target-tilegx: Generate tcg instructions to finish "Hello world"
>>> target-tilegx: Add TILE-Gx building files
>>
>> I gave some of these my reviewed-by: tag in v11. Please don't
>> just drop that, it wastes my time when I end up re-looking
>> at patches I've already reviewed.
>>
>
> OK, thanks. I shall remember next time (and keep the reviewers' Reviewed-by
> tags on already-reviewed patches when re-sending a series).
>
>> Anyway, you can add my Reviewed-by: tag to patches 1-7 and 10.
>> I'll let rth do patches 8 and 9.
>>
>
> OK, thanks.
>
> And excuse me, I am not quite familiar with the related workflow:
>
> - Shall I apply these patches (with the Reviewed-by tags), or will someone
> else help to do it?
>
> - If I am to be the maintainer of tilegx, what shall I do next (e.g. add an
> entry to MAINTAINERS, and what is the workflow for a maintainer)?
>
> For me, the next steps (after the current patches are applied) are:
>
> - Run common tests (and send bug-fix patches as needed):
>
> Get busybox working (e.g. sh, ls, cp, mv, vi).
> Finish the DejaGNU gcc testsuite for tilegx (which is my original goal).
>
> - Finish all tilegx instructions (then send the new patches).
>
> - Try qemu system mode for tilegx (I hope to finish it within this year).
>
>
>> Opinions on whether we should put this series into master now
>> (assuming 8 and 9 are good), or delay until after 2.4 release?
>>
>
> OK, thanks. Any other members' ideas, suggestions, and improvements are
> welcome.
>
>
> Thanks.
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Qemu-devel] [PATCH 08/10 v12] target-tilegx: Add several helpers for instructions translation
[not found] ` <55A76DE6.4070103@hotmail.com>
@ 2015-07-16 8:42 ` gchen gchen
0 siblings, 0 replies; 21+ messages in thread
From: gchen gchen @ 2015-07-16 8:42 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
Hello Maintainers:

Please help review this patch when you have time.

Thanks.

On 06/13/2015 09:19 PM, Chen Gang wrote:
> The related instructions are exception, cntlz, cnttz, shufflebytes, and
> add_saturate.
>
> Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
> ---
> target-tilegx/helper.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++++++
> target-tilegx/helper.h | 5 +++
> 2 files changed, 88 insertions(+)
> create mode 100644 target-tilegx/helper.c
> create mode 100644 target-tilegx/helper.h
>
> diff --git a/target-tilegx/helper.c b/target-tilegx/helper.c
> new file mode 100644
> index 0000000..5ab41cd
> --- /dev/null
> +++ b/target-tilegx/helper.c
> @@ -0,0 +1,83 @@
> +/*
> + * QEMU TILE-Gx helpers
> + *
> + * Copyright (c) 2015 Chen Gang
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see
> + * <http://www.gnu.org/licenses/lgpl-2.1.html>
> + */
> +
> +#include "cpu.h"
> +#include "qemu-common.h"
> +#include "exec/helper-proto.h"
> +
> +#define SIGNBIT32 0x80000000
> +
> +int64_t helper_add_saturate(CPUTLGState *env, uint64_t rsrc, uint64_t rsrcb)
> +{
> + uint32_t rdst = rsrc + rsrcb;
> +
> + if (((rdst ^ rsrc) & SIGNBIT32) && !((rsrc ^ rsrcb) & SIGNBIT32)) {
> + rdst = ~(((int32_t)rsrc >> 31) ^ SIGNBIT32);
> + }
> +
> + return (int64_t)rdst;
> +}
> +
> +void helper_exception(CPUTLGState *env, uint32_t excp)
> +{
> + CPUState *cs = CPU(tilegx_env_get_cpu(env));
> +
> + cs->exception_index = excp;
> + cpu_loop_exit(cs);
> +}
> +
> +uint64_t helper_cntlz(uint64_t arg)
> +{
> + return clz64(arg);
> +}
> +
> +uint64_t helper_cnttz(uint64_t arg)
> +{
> + return ctz64(arg);
> +}
> +
> +/*
> + * Functional Description
> + * uint64_t a = rf[SrcA];
> + * uint64_t b = rf[SrcB];
> + * uint64_t d = rf[Dest];
> + * uint64_t output = 0;
> + * unsigned int counter;
> + * for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
> + * {
> + * int sel = getByte (b, counter) & 0xf;
> + * uint8_t byte = (sel < 8) ? getByte (d, sel) : getByte (a, (sel - 8));
> + * output = setByte (output, counter, byte);
> + * }
> + * rf[Dest] = output;
> + */
> +uint64_t helper_shufflebytes(uint64_t rdst, uint64_t rsrc, uint64_t rsrcb)
> +{
> + uint64_t vdst = 0;
> + int count;
> +
> + for (count = 0; count < 64; count += 8) {
> + uint64_t sel = rsrcb >> count;
> + uint64_t src = (sel & 8) ? rsrc : rdst;
> + vdst |= ((src >> ((sel & 7) * 8)) & 0xff) << count;
> + }
> +
> + return vdst;
> +}
> diff --git a/target-tilegx/helper.h b/target-tilegx/helper.h
> new file mode 100644
> index 0000000..1411c19
> --- /dev/null
> +++ b/target-tilegx/helper.h
> @@ -0,0 +1,5 @@
> +DEF_HELPER_2(exception, noreturn, env, i32)
> +DEF_HELPER_FLAGS_1(cntlz, TCG_CALL_NO_RWG_SE, i64, i64)
> +DEF_HELPER_FLAGS_1(cnttz, TCG_CALL_NO_RWG_SE, i64, i64)
> +DEF_HELPER_FLAGS_3(shufflebytes, TCG_CALL_NO_RWG_SE, i64, i64, i64, i64)
> +DEF_HELPER_3(add_saturate, s64, env, i64, i64)
>

--
Chen Gang

Open, share, and attitude like air, water, and life which God blessed
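As a quick cross-check, the saturation rule that helper_add_saturate applies
can be exercised standalone (a minimal sketch, independent of QEMU; the
add_saturate32 name and the main() harness are illustrative only):

#include <stdio.h>
#include <stdint.h>

#define SIGNBIT32 0x80000000u

/* 32-bit signed saturating add, mirroring the helper above. */
static uint32_t add_saturate32(uint32_t a, uint32_t b)
{
    uint32_t r = a + b;

    /* Signed overflow iff the operands agree in sign but the result does not. */
    if (((r ^ a) & SIGNBIT32) && !((a ^ b) & SIGNBIT32)) {
        r = ~(((int32_t)a >> 31) ^ SIGNBIT32); /* INT32_MAX or INT32_MIN */
    }
    return r;
}

int main(void)
{
    printf("%08x\n", add_saturate32(0x7fffffffu, 1u));          /* 7fffffff */
    printf("%08x\n", add_saturate32(0x80000000u, 0xffffffffu)); /* 80000000 */
    return 0;
}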
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world"
[not found] ` <55A76DB1.4090302@hotmail.com>
@ 2015-07-16 8:43 ` gchen gchen
0 siblings, 0 replies; 21+ messages in thread
From: gchen gchen @ 2015-07-16 8:43 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
Hello Maintainers:
Please help review this patch when you have time.
Thanks.
On 06/13/2015 09:21 PM, Chen Gang wrote:
> Generate the related tcg instructions, so qemu tilegx can finish running
> "Hello world". The elf64 binary can be either static or shared.
>
> Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
> ---
> target-tilegx/translate.c | 2966 +++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 2966 insertions(+)
> create mode 100644 target-tilegx/translate.c
>
> diff --git a/target-tilegx/translate.c b/target-tilegx/translate.c
> new file mode 100644
> index 0000000..1dd3a43
> --- /dev/null
> +++ b/target-tilegx/translate.c
> @@ -0,0 +1,2966 @@
> +/*
> + * QEMU TILE-Gx CPU
> + *
> + * Copyright (c) 2015 Chen Gang
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, see
> + * <http://www.gnu.org/licenses/lgpl-2.1.html>
> + */
> +
> +#include "cpu.h"
> +#include "qemu/log.h"
> +#include "disas/disas.h"
> +#include "tcg-op.h"
> +#include "exec/cpu_ldst.h"
> +#include "opcode_tilegx.h"
> +#include "spr_def_64.h"
> +
> +#define FMT64X "%016" PRIx64
> +#define TILEGX_TMP_REGS (TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE + 1)
> +
> +static TCGv_ptr cpu_env;
> +static TCGv cpu_pc;
> +static TCGv cpu_regs[TILEGX_R_COUNT];
> +static TCGv cpu_spregs[TILEGX_SPR_COUNT];
> +#if defined(CONFIG_USER_ONLY)
> +static TCGv_i32 cpu_excparam;
> +#endif
> +
> +static const char * const reg_names[] = {
> + "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
> + "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
> + "r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
> + "r24", "r25", "r26", "r27", "r28", "r29", "r30", "r31",
> + "r32", "r33", "r34", "r35", "r36", "r37", "r38", "r39",
> + "r40", "r41", "r42", "r43", "r44", "r45", "r46", "r47",
> + "r48", "r49", "r50", "r51", "bp", "tp", "sp", "lr"
> +};
> +
> +static const char * const spreg_names[] = {
> + "cmpexch", "criticalsec", "simcontrol"
> +};
> +
> +/* Temporary registers, used to buffer register writes within a bundle */
> +typedef struct DisasContextTemp {
> + uint8_t idx; /* index */
> + TCGv val; /* value */
> +} DisasContextTemp;
> +
> +/* This is the state at translation time. */
> +typedef struct DisasContext {
> + uint64_t pc; /* Current pc */
> + int exception; /* Current exception */
> +
> + TCGv zero; /* For zero register */
> +
> + DisasContextTemp *tmp_regcur; /* Current temporary registers */
> + DisasContextTemp tmp_regs[TILEGX_TMP_REGS]; /* All temporary registers */
> + struct {
> + TCGCond cond; /* Branch condition */
> + TCGv dest; /* pc jump destination, if will jump */
> + TCGv val1; /* First value for condition comparison */
> + TCGv val2; /* Second value for condition comparison */
> + } jmp; /* Jump object, only once in each TB block */
> +} DisasContext;
> +
> +#include "exec/gen-icount.h"
> +
> +static void gen_exception(DisasContext *dc, int num)
> +{
> + TCGv_i32 tmp = tcg_const_i32(num);
> +
> + gen_helper_exception(cpu_env, tmp);
> + tcg_temp_free_i32(tmp);
> +}
> +
> +/*
> + * All exceptions which still allow execution to continue are raised in pipe
> + * x1, the last pipe of a bundle, so it is OK to process only the first
> + * exception raised within a bundle.
> + */
> +static void set_exception(DisasContext *dc, int num)
> +{
> + if (dc->exception == TILEGX_EXCP_NONE) {
> + dc->exception = num;
> + }
> +}
> +
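> +/*
> + * Return true if reg is a normal general register. Otherwise return false,
> + * queueing an IDN/UDN access exception where one applies; callers then fall
> + * back to the zero register.
> + */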
> +static bool check_gr(DisasContext *dc, uint8_t reg)
> +{
> + if (likely(reg < TILEGX_R_COUNT)) {
> + return true;
> + }
> +
> + switch (reg) {
> + case TILEGX_R_SN:
> + case TILEGX_R_ZERO:
> + break;
> + case TILEGX_R_IDN0:
> + case TILEGX_R_IDN1:
> + set_exception(dc, TILEGX_EXCP_REG_IDN_ACCESS);
> + break;
> + case TILEGX_R_UDN0:
> + case TILEGX_R_UDN1:
> + case TILEGX_R_UDN2:
> + case TILEGX_R_UDN3:
> + set_exception(dc, TILEGX_EXCP_REG_UDN_ACCESS);
> + break;
> + default:
> + g_assert_not_reached();
> + }
> + return false;
> +}
> +
> +static TCGv load_zero(DisasContext *dc)
> +{
> + if (TCGV_IS_UNUSED_I64(dc->zero)) {
> + dc->zero = tcg_const_i64(0);
> + }
> + return dc->zero;
> +}
> +
> +static TCGv load_gr(DisasContext *dc, uint8_t reg)
> +{
> + if (check_gr(dc, reg)) {
> + return cpu_regs[reg];
> + }
> + return load_zero(dc);
> +}
> +
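> +/*
> + * Allocate a buffered destination for a register write. Writes within a
> + * bundle are staged in dc->tmp_regs and committed only after the whole
> + * bundle has been translated, so every instruction in the bundle sees the
> + * register values from before the bundle.
> + */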
> +static TCGv dest_gr(DisasContext *dc, uint8_t rdst)
> +{
> + DisasContextTemp *tmp = dc->tmp_regcur++;
> +
> + /* Skip the result, mark the exception if necessary, and continue */
> + check_gr(dc, rdst);
> + assert((dc->tmp_regcur - dc->tmp_regs) < TILEGX_TMP_REGS);
> + tmp->idx = rdst;
> + tmp->val = tcg_temp_new_i64();
> + return tmp->val;
> +}
> +
> +static void gen_atomic_excp(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + int excp, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> +#if defined(CONFIG_USER_ONLY)
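> + /*
> + * Pack the three register numbers into the single exception parameter;
> + * the linux-user main loop is expected to unpack them and emulate the
> + * atomic operation.
> + */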
> + tcg_gen_movi_i32(cpu_excparam, (rdst << 16) | (rsrc << 8) | rsrcb);
> + tcg_gen_movi_i64(cpu_pc, dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
> + set_exception(dc, excp);
> +#else
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> +#endif
> +}
> +
> +static void gen_swint1(struct DisasContext *dc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "swint1\n");
> +
> + tcg_gen_movi_i64(cpu_pc, dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
> + set_exception(dc, TILEGX_EXCP_SYSCALL);
> +}
> +
> +/*
> + * Many SPR reads/writes have side effects and cannot be buffered. However,
> + * they are all in the X1 pipe, which we execute last, so no additional
> + * buffering is needed.
> + */
> +
> +static void gen_mfspr(struct DisasContext *dc, uint8_t rdst, uint16_t imm14)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "mfspr r%d, 0x%x\n", rdst, imm14);
> +
> + if (!check_gr(dc, rdst)) {
> + return;
> + }
> +
> + switch (imm14) {
> + case SPR_CMPEXCH_VALUE:
> + tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_CMPEXCH]);
> + return;
> + case SPR_INTERRUPT_CRITICAL_SECTION:
> + tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_CRITICAL_SEC]);
> + return;
> + case SPR_SIM_CONTROL:
> + tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_SIM_CONTROL]);
> + return;
> + default:
> + qemu_log_mask(LOG_UNIMP, "UNIMP mfspr 0x%x.\n", imm14);
> + }
> +
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> +}
> +
> +static void gen_mtspr(struct DisasContext *dc, uint8_t rsrc, uint16_t imm14)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "mtspr 0x%x, r%d\n", imm14, rsrc);
> +
> + switch (imm14) {
> + case SPR_CMPEXCH_VALUE:
> + tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_CMPEXCH], load_gr(dc, rsrc));
> + return;
> + case SPR_INTERRUPT_CRITICAL_SECTION:
> + tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_CRITICAL_SEC], load_gr(dc, rsrc));
> + return;
> + case SPR_SIM_CONTROL:
> + tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_SIM_CONTROL], load_gr(dc, rsrc));
> + return;
> + default:
> + qemu_log_mask(LOG_UNIMP, "UNIMP mtspr 0x%x.\n", imm14);
> + }
> +
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> +}
> +
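> +/* Extract/insert one byte lane of a v1 (eight 8-bit lanes) vector value. */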
> +static void extract_v1(TCGv out, TCGv in, unsigned byte)
> +{
> + tcg_gen_shri_i64(out, in, byte * 8);
> + tcg_gen_ext8u_i64(out, out);
> +}
> +
> +static void insert_v1(TCGv out, TCGv in, unsigned byte)
> +{
> + tcg_gen_deposit_i64(out, out, in, byte * 8, 8);
> +}
> +
> +static void gen_v1cmpi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8,
> + TCGCond cond, const char *code)
> +{
> + int count;
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv vsrc = load_gr(dc, rsrc);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
> + code, rdst, rsrc, imm8);
> +
> + tcg_gen_movi_i64(vdst, 0);
> + for (count = 0; count < 8; count++) {
> + extract_v1(tmp, vsrc, count);
> + tcg_gen_setcondi_i64(cond, tmp, tmp, imm8);
> + insert_v1(vdst, tmp, count);
> + }
> + tcg_temp_free_i64(tmp);
> +}
> +
> +static void gen_v1cmp(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + TCGCond cond, const char *code)
> +{
> + int count;
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv vsrc = load_gr(dc, rsrc);
> + TCGv vsrcb = load_gr(dc, rsrcb);
> + TCGv tmp = tcg_temp_new_i64();
> + TCGv tmp2 = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> +
> + tcg_gen_movi_i64(vdst, 0);
> + for (count = 0; count < 8; count++) {
> + extract_v1(tmp, vsrc, count);
> + extract_v1(tmp2, vsrcb, count);
> + tcg_gen_setcond_i64(cond, tmp, tmp, tmp2);
> + insert_v1(vdst, tmp, count);
> + }
> + tcg_temp_free_i64(tmp2);
> + tcg_temp_free_i64(tmp);
> +}
> +
> +static void gen_v1shrui(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + int count;
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv vsrc = load_gr(dc, rsrc);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1shrui r%d, r%d, %u\n",
> + rdst, rsrc, shamt);
> +
> + shamt &= 7;
> + tcg_gen_movi_i64(vdst, 0);
> + for (count = 0; count < 8; count++) {
> + extract_v1(tmp, vsrc, count);
> + tcg_gen_shri_i64(tmp, tmp, shamt);
> + insert_v1(vdst, tmp, count);
> + }
> + tcg_temp_free_i64(tmp);
> +}
> +
> +/*
> + * Description
> + *
> + * Interleave the four low-order bytes of the first operand with the four
> + * low-order bytes of the second operand. The low-order byte of the result will
> + * be the low-order byte of the second operand. For example if the first operand
> + * contains the packed bytes {A7,A6,A5,A4,A3,A2,A1,A0} and the second operand
> + * contains the packed bytes {B7,B6,B5,B4,B3,B2,B1,B0} then the result will be
> + * {A3,B3,A2,B2,A1,B1,A0,B0}.
> + *
> + * Functional Description
> + *
> + * uint64_t output = 0;
> + * uint32_t counter;
> + * for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
> + * {
> + * bool asel = ((counter & 1) == 1);
> + * int in_sel = 0 + counter / 2;
> + * int8_t srca = getByte (rf[SrcA], in_sel);
> + * int8_t srcb = getByte (rf[SrcB], in_sel);
> + * output = setByte (output, counter, (asel ? srca : srcb));
> + * }
> + * rf[Dest] = output;
> + */
> +static void gen_v1int_l(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + int count;
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv vsrc = load_gr(dc, rsrc);
> + TCGv vsrcb = load_gr(dc, rsrcb);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1int_l r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> +
> + tcg_gen_movi_i64(vdst, 0);
> + for (count = 0; count < 4; count++) {
> + extract_v1(tmp, vsrc, count);
> + insert_v1(vdst, tmp, 2 * count + 1);
> + extract_v1(tmp, vsrcb, count);
> + insert_v1(vdst, tmp, 2 * count);
> + }
> + tcg_temp_free_i64(tmp);
> +}
> +
> +static void gen_v4int_l(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> + tcg_gen_deposit_i64(dest_gr(dc, rdst), load_gr(dc, rsrcb),
> + load_gr(dc, rsrc), 32, 32);
> +}
> +
> +static void gen_cmpi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8,
> + TCGCond cond, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
> + code, rdst, rsrc, imm8);
> + tcg_gen_setcondi_i64(cond,
> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_cmp(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + TCGCond cond, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> + tcg_gen_setcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
> + load_gr(dc, rsrcb));
> +}
> +
> +static void gen_cmov(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + TCGCond cond, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> + tcg_gen_movcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
> + load_zero(dc), load_gr(dc, rsrcb), load_gr(dc, rdst));
> +}
> +
> +static void gen_menz(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + TCGCond cond, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> +
> + tcg_gen_movcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
> + load_zero(dc), load_gr(dc, rsrcb), load_zero(dc));
> +}
> +
> +static void gen_add(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "add r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_add_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_addimm(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int16_t imm)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "add(l)i r%d, r%d, %d\n",
> + rdst, rsrc, imm);
> + tcg_gen_addi_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm);
> +}
> +
> +static void gen_addxsc(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "addxsc r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> + gen_helper_add_saturate(dest_gr(dc, rdst), cpu_env,
> + load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_addx(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "addx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_add_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
> +static void gen_addximm(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int16_t imm)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "addx(l)i r%d, r%d, %d\n",
> + rdst, rsrc, imm);
> + tcg_gen_addi_i64(vdst, load_gr(dc, rsrc), imm);
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
> +static void gen_sub(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "sub r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_sub_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_subx(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "subx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_sub_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
> +/*
> + * uint64_t mask = 0;
> + * int64_t background = ((rf[SrcA] >> BFEnd) & 1) ? -1ULL : 0ULL;
> + * mask = ((-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1));
> + * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
> + * | (rf[SrcA] << (64 - BFStart));
> + * rf[Dest] = (rot_src & mask) | (background & ~mask);
> + */
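> +/*
> + * A worked example: bfexts rd, rs, 8, 15 extracts byte 1 of rs and sign
> + * extends it: mask is 0xff, rot_src is rs rotated right by 8, and the
> + * background replicates bit 15 of rs across the high bits.
> + */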
> +static void gen_bfexts(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
> + uint8_t start, uint8_t end)
> +{
> + uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfexts r%d, r%d, %d, %d\n",
> + rdst, rsrc, start, end);
> +
> + tcg_gen_rotri_i64(vdst, load_gr(dc, rsrc), start);
> + tcg_gen_andi_i64(vdst, vdst, mask);
> +
> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), end);
> + tcg_gen_andi_i64(tmp, tmp, 1);
> + tcg_gen_neg_i64(tmp, tmp);
> + tcg_gen_andi_i64(tmp, tmp, ~mask);
> + tcg_gen_or_i64(vdst, vdst, tmp);
> +
> + tcg_temp_free_i64(tmp);
> +}
> +
> +/*
> + * The related functional description for bfextu in isa document:
> + *
> + * uint64_t mask = 0;
> + * mask = (-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1);
> + * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
> + * | (rf[SrcA] << (64 - BFStart));
> + * rf[Dest] = rot_src & mask;
> + */
> +static void gen_bfextu(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
> + uint8_t start, uint8_t end)
> +{
> + uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfextu r%d, r%d, %d, %d\n",
> + rdst, rsrc, start, end);
> +
> + tcg_gen_rotri_i64(vdst, load_gr(dc, rsrc), start);
> + tcg_gen_andi_i64(vdst, vdst, mask);
> +}
> +
> +/*
> + * mask = (start <= end) ? ((-1ULL << start) ^ ((-1ULL << end) << 1))
> + * : ((-1ULL << start) | (-1ULL >> (63 - end)));
> + * uint64_t rot_src = (rf[SrcA] << start)
> + * | ((uint64_t) rf[SrcA] >> (64 - start));
> + * rf[Dest] = (rot_src & mask) | (rf[Dest] & (-1ULL ^ mask));
> + */
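> +/*
> + * A worked example: bfins rd, rs, 8, 15 replaces byte 1 of rd with the low
> + * byte of rs: mask is 0xff00 and rot_src is rs rotated left by 8.
> + */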
> +static void gen_bfins(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
> + uint8_t start, uint8_t end)
> +{
> + uint64_t mask = (start <= end) ? ((-1ULL << start) ^ ((-1ULL << end) << 1))
> + : ((-1ULL << start) | (-1ULL >> (63 - end)));
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfins r%d, r%d, %d, %d\n",
> + rdst, rsrc, start, end);
> +
> + tcg_gen_rotli_i64(tmp, load_gr(dc, rsrc), start);
> +
> + tcg_gen_andi_i64(tmp, tmp, mask);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rdst), -1ULL ^ mask);
> + tcg_gen_or_i64(vdst, vdst, tmp);
> +
> + tcg_temp_free_i64(tmp);
> +}
> +
> +static void gen_or(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "or r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_or_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_ori(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "ori r%d, r%d, %d\n", rdst, rsrc, imm8);
> + tcg_gen_ori_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_xor(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "xor r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_xor_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_xori(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "xori r%d, r%d, %d\n", rdst, rsrc, imm8);
> + tcg_gen_xori_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_nor(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "nor r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_nor_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_and(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "and r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_and_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_andi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "andi r%d, r%d, %d\n", rdst, rsrc, imm8);
> + tcg_gen_andi_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_mulx(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + bool add, const char *code)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> +
> + tcg_gen_mul_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
> + if (add) {
> + tcg_gen_add_i64(vdst, load_gr(dc, rdst), vdst);
> + }
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
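> +/*
> + * Generic 32x32 multiply: high/highb select the high or low 32-bit half of
> + * each source, sign/signb choose signed or unsigned extension of that half,
> + * and add accumulates the product into the old destination value.
> + */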
> +static void gen_mul(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + bool add, bool high, bool sign,
> + bool highb, bool signb,
> + const char *code)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv vsrc = load_gr(dc, rsrc);
> + TCGv vsrcb = load_gr(dc, rsrcb);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, r%d\n",
> + code, rdst, rsrc, rsrcb);
> +
> + if (high) {
> + tcg_gen_shri_i64(tmp, vsrc, 32);
> + } else {
> + tcg_gen_andi_i64(tmp, vsrc, 0xffffffff);
> + }
> + if (sign) {
> + tcg_gen_ext32s_i64(tmp, tmp);
> + }
> +
> + if (highb) {
> + tcg_gen_shri_i64(vdst, vsrcb, 32);
> + } else {
> + tcg_gen_andi_i64(vdst, vsrcb, 0xffffffff);
> + }
> + if (signb) {
> + tcg_gen_ext32s_i64(vdst, vdst);
> + }
> +
> + tcg_gen_mul_i64(vdst, tmp, vdst);
> +
> + if (add) {
> + tcg_gen_add_i64(vdst, load_gr(dc, rdst), vdst);
> + }
> +
> + tcg_temp_free_i64(tmp);
> +}
> +
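> +/*
> + * The "x" shift variants take the shift count modulo 32 and sign-extend
> + * the 32-bit result to 64 bits, like addx/subx.
> + */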
> +static void gen_shlx(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shlx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 31);
> + tcg_gen_shl_i64(vdst, load_gr(dc, rsrc), vdst);
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
> +static void gen_shl(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
> + tcg_gen_shl_i64(vdst, load_gr(dc, rsrc), vdst);
> +}
> +
> +static void gen_shli(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shli r%d, r%d, %u\n", rdst, rsrc, shamt);
> + tcg_gen_shli_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
> +}
> +
> +static void gen_shlxi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shlxi r%d, r%d, %u\n", rdst, rsrc, shamt);
> + tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shamt & 31);
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
> +static void gen_shladd(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
> + uint8_t shift, bool cast)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl%dadd%s r%d, r%d, r%d\n",
> + shift, cast ? "x" : "", rdst, rsrc, rsrcb);
> + tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shift);
> + tcg_gen_add_i64(vdst, vdst, load_gr(dc, rsrcb));
> + if (cast) {
> + tcg_gen_ext32s_i64(vdst, vdst);
> + }
> +}
> +
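> +/*
> + * shl16insli shifts the source left by 16 bits and inserts a 16-bit
> + * immediate; a moveli/shl16insli chain builds a 64-bit constant 16 bits at
> + * a time.
> + */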
> +static void gen_shl16insli(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint16_t uimm16)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl16insli r%d, r%d, 0x%x\n",
> + rdst, rsrc, uimm16);
> + tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), 16);
> + tcg_gen_ori_i64(vdst, vdst, uimm16);
> +}
> +
> +static void gen_shrs(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrs r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
> + tcg_gen_sar_i64(vdst, load_gr(dc, rsrc), vdst);
> +}
> +
> +static void gen_shrux(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrux r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 31);
> + tcg_gen_andi_i64(tmp, load_gr(dc, rsrc), 0xffffffff);
> + tcg_gen_shr_i64(vdst, tmp, vdst);
> + tcg_gen_ext32s_i64(vdst, vdst);
> +
> + tcg_temp_free_i64(tmp);
> +}
> +
> +static void gen_shru(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shru r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
> + tcg_gen_shr_i64(vdst, load_gr(dc, rsrc), vdst);
> +}
> +
> +static void gen_shufflebytes(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shufflebytes r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> + gen_helper_shufflebytes(dest_gr(dc, rdst), load_gr(dc, rdst),
> + load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_shrsi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrsi r%d, r%d, %u\n", rdst, rsrc, shamt);
> + tcg_gen_sari_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
> +}
> +
> +static void gen_shrui(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrui r%d, r%d, %u\n", rdst, rsrc, shamt);
> + tcg_gen_shri_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
> +}
> +
> +static void gen_shruxi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "shruxi r%d, r%d, %u\n",
> + rdst, rsrc, shamt);
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
> + tcg_gen_shri_i64(vdst, vdst, shamt & 31);
> + tcg_gen_ext32s_i64(vdst, vdst);
> +}
> +
> +static void gen_rotl(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "rotl r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
> + tcg_gen_rotl_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
> +}
> +
> +static void gen_rotli(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t shamt)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "rotli r%d, r%d, %u\n",
> + rdst, rsrc, shamt);
> + tcg_gen_rotli_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
> +}
> +
> +static void gen_dblalign(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv mask = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "dblalign r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> +
> + tcg_gen_andi_i64(mask, load_gr(dc, rsrcb), 7);
> + tcg_gen_shli_i64(mask, mask, 3);
> + tcg_gen_shr_i64(vdst, load_gr(dc, rdst), mask);
> +
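> + /*
> + * Shift rsrc left by (64 - shift) without relying on a shift count of
> + * 64 being defined: shift by (shift ^ 63), then by one more bit.
> + */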
> + tcg_gen_xori_i64(mask, mask, 63);
> + tcg_gen_shl_i64(mask, load_gr(dc, rsrc), mask);
> + tcg_gen_shli_i64(mask, mask, 1);
> +
> + tcg_gen_or_i64(vdst, vdst, mask);
> +
> + tcg_temp_free_i64(mask);
> +}
> +
> +static void gen_cntlz(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cntlz r%d, r%d\n", rdst, rsrc);
> + gen_helper_cntlz(dest_gr(dc, rdst), load_gr(dc, rsrc));
> +}
> +
> +static void gen_cnttz(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cnttz r%d, r%d\n", rdst, rsrc);
> + gen_helper_cnttz(dest_gr(dc, rdst), load_gr(dc, rsrc));
> +}
> +
> +static void gen_ld(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc,
> + TCGMemOp ops, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d\n", code, rdst, rsrc);
> + tcg_gen_qemu_ld_i64(dest_gr(dc, rdst), load_gr(dc, rsrc),
> + MMU_USER_IDX, ops);
> +}
> +
> +static void gen_ld_add(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8,
> + TCGMemOp ops, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
> + code, rdst, rsrc, imm8);
> +
> + tcg_gen_qemu_ld_i64(dest_gr(dc, rdst), load_gr(dc, rsrc),
> + MMU_USER_IDX, ops);
> + tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_st(struct DisasContext *dc,
> + uint8_t rsrc, uint8_t rsrcb,
> + TCGMemOp ops, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d\n", code, rsrc, rsrcb);
> + tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
> + MMU_USER_IDX, ops);
> +}
> +
> +static void gen_st_add(struct DisasContext *dc,
> + uint8_t rsrc, uint8_t rsrcb, uint8_t imm8,
> + TCGMemOp ops, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
> + code, rsrc, rsrcb, imm8);
> + tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
> + MMU_USER_IDX, ops);
> + tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_lnk(struct DisasContext *dc, uint8_t rdst)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "lnk r%d\n", rdst);
> + tcg_gen_movi_i64(dest_gr(dc, rdst), dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
> +}
> +
> +static void gen_b(struct DisasContext *dc,
> + uint8_t rsrc, int32_t off, TCGCond cond, const char *code)
> +{
> + uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, %d ([" TARGET_FMT_lx "] %s)\n",
> + code, rsrc, off, pos, lookup_symbol(pos));
> +
> + dc->jmp.dest = tcg_temp_new_i64();
> + dc->jmp.val1 = tcg_temp_new_i64();
> + dc->jmp.val2 = tcg_temp_new_i64();
> +
> + dc->jmp.cond = cond;
> + tcg_gen_movi_i64(dc->jmp.dest, pos);
> + tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
> + tcg_gen_movi_i64(dc->jmp.val2, 0);
> +}
> +
> +static void gen_blb(struct DisasContext *dc, uint8_t rsrc, int32_t off,
> + TCGCond cond, const char *code)
> +{
> + uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, %d ([" TARGET_FMT_lx "] %s)\n",
> + code, rsrc, off, pos, lookup_symbol(pos));
> +
> + dc->jmp.dest = tcg_temp_new_i64();
> + dc->jmp.val1 = tcg_temp_new_i64();
> + dc->jmp.val2 = tcg_temp_new_i64();
> +
> + dc->jmp.cond = cond;
> + tcg_gen_movi_i64(dc->jmp.dest, pos);
> + tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
> + tcg_gen_andi_i64(dc->jmp.val1, dc->jmp.val1, 1ULL);
> + tcg_gen_movi_i64(dc->jmp.val2, 0);
> +}
> +
> +/* For memory fence */
> +static void gen_mf(struct DisasContext *dc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "mf\n");
> + /* FIXME: Do we need any implementation for it? I guess no. */
> +}
> +
> +/* wh64: write-hint 64 bytes, a cache-line hint. */
> +static void gen_wh64(struct DisasContext *dc, uint8_t rsrc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "wh64 r%d\n", rsrc);
> + /* FIXME: Do we need any implementation for it? I guess no. */
> +}
> +
> +static void gen_jr(struct DisasContext *dc, uint8_t rsrc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "jr(p) r%d\n", rsrc);
> +
> + dc->jmp.dest = tcg_temp_new_i64();
> +
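> + /* Bundles are 8 bytes, so mask the target down to bundle alignment. */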
> + dc->jmp.cond = TCG_COND_ALWAYS;
> + tcg_gen_andi_i64(dc->jmp.dest, load_gr(dc, rsrc), ~(sizeof(uint64_t) - 1));
> +}
> +
> +static void gen_jalr(struct DisasContext *dc, uint8_t rsrc)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "jalr(p) r%d\n", rsrc);
> +
> + dc->jmp.dest = tcg_temp_new_i64();
> + tcg_gen_movi_i64(dest_gr(dc, TILEGX_R_LR),
> + dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
> +
> + dc->jmp.cond = TCG_COND_ALWAYS;
> + tcg_gen_andi_i64(dc->jmp.dest, load_gr(dc, rsrc), ~(sizeof(uint64_t) - 1));
> +}
> +
> +static void gen_j(struct DisasContext *dc, int off)
> +{
> + uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "j %d ([" TARGET_FMT_lx "] %s)\n",
> + off, pos, lookup_symbol(pos));
> +
> + dc->jmp.dest = tcg_temp_new_i64();
> +
> + dc->jmp.cond = TCG_COND_ALWAYS;
> + tcg_gen_movi_i64(dc->jmp.dest, pos);
> +}
> +
> +static void gen_jal(struct DisasContext *dc, int off)
> +{
> + uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "jal %d ([" TARGET_FMT_lx "] %s)\n",
> + off, pos, lookup_symbol(pos));
> +
> + dc->jmp.dest = tcg_temp_new_i64();
> + tcg_gen_movi_i64(dest_gr(dc, TILEGX_R_LR),
> + dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
> +
> + dc->jmp.cond = TCG_COND_ALWAYS;
> + tcg_gen_movi_i64(dc->jmp.dest, pos);
> +}
> +
> +static void decode_rrr_0_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case ADD_RRR_0_OPCODE_Y0:
> + gen_add(dc, rdst, rsrc, rsrcb);
> + return;
> + case ADDX_RRR_0_OPCODE_Y0:
> + gen_addx(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUBX_RRR_0_OPCODE_Y0:
> + gen_subx(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUB_RRR_0_OPCODE_Y0:
> + gen_sub(dc, rdst, rsrc, rsrcb);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_u_opcode_ex_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_UnaryOpcodeExtension_Y0(bundle)) {
> + case CNTLZ_UNARY_OPCODE_Y0:
> + gen_cntlz(dc, rdst, rsrc);
> + return;
> + case CNTTZ_UNARY_OPCODE_Y0:
> + gen_cnttz(dc, rdst, rsrc);
> + return;
> + case FNOP_UNARY_OPCODE_Y0:
> + case NOP_UNARY_OPCODE_Y0:
> + if (!rsrc && !rdst) {
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
> + return;
> + }
> + /* Fall through */
> + case FSINGLE_PACK1_UNARY_OPCODE_Y0:
> + case PCNT_UNARY_OPCODE_Y0:
> + case REVBITS_UNARY_OPCODE_Y0:
> + case REVBYTES_UNARY_OPCODE_Y0:
> + case TBLIDXB0_UNARY_OPCODE_Y0:
> + case TBLIDXB1_UNARY_OPCODE_Y0:
> + case TBLIDXB2_UNARY_OPCODE_Y0:
> + case TBLIDXB3_UNARY_OPCODE_Y0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP decode_u_opcode_ex_y0, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_1_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case UNARY_RRR_1_OPCODE_Y0:
> + return decode_u_opcode_ex_y0(dc, bundle);
> + case SHL1ADD_RRR_1_OPCODE_Y0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
> + return;
> + case SHL2ADD_RRR_1_OPCODE_Y0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
> + return;
> + case SHL3ADD_RRR_1_OPCODE_Y0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_2_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case CMPLES_RRR_2_OPCODE_Y0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
> + return;
> + case CMPLEU_RRR_2_OPCODE_Y0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
> + return;
> + case CMPLTS_RRR_2_OPCODE_Y0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
> + return;
> + case CMPLTU_RRR_2_OPCODE_Y0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_3_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case CMPEQ_RRR_3_OPCODE_Y0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
> + return;
> + case CMPNE_RRR_3_OPCODE_Y0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
> + return;
> + case MULAX_RRR_3_OPCODE_Y0:
> + gen_mulx(dc, rdst, rsrc, rsrcb, true, "mulax");
> + return;
> + case MULX_RRR_3_OPCODE_Y0:
> + gen_mulx(dc, rdst, rsrc, rsrcb, false, "mulx");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_4_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case CMOVNEZ_RRR_4_OPCODE_Y0:
> + gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmovnez");
> + return;
> + case CMOVEQZ_RRR_4_OPCODE_Y0:
> + gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmoveqz");
> + return;
> + case MNZ_RRR_4_OPCODE_Y0:
> + gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "mnz");
> + return;
> + case MZ_RRR_4_OPCODE_Y0:
> + gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "mz");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_5_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case OR_RRR_5_OPCODE_Y0:
> + gen_or(dc, rdst, rsrc, rsrcb);
> + return;
> + case AND_RRR_5_OPCODE_Y0:
> + gen_and(dc, rdst, rsrc, rsrcb);
> + return;
> + case NOR_RRR_5_OPCODE_Y0:
> + gen_nor(dc, rdst, rsrc, rsrcb);
> + return;
> + case XOR_RRR_5_OPCODE_Y0:
> + gen_xor(dc, rdst, rsrc, rsrcb);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_6_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case ROTL_RRR_6_OPCODE_Y0:
> + gen_rotl(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHL_RRR_6_OPCODE_Y0:
> + gen_shl(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRS_RRR_6_OPCODE_Y0:
> + gen_shrs(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRU_RRR_6_OPCODE_Y0:
> + gen_shru(dc, rdst, rsrc, rsrcb);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_9_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rsrcb = get_SrcB_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_RRROpcodeExtension_Y0(bundle)) {
> + case MULA_HU_HU_RRR_9_OPCODE_Y0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, false, true, false, "mula_hu_hu");
> + return;
> + case MULA_LU_LU_RRR_9_OPCODE_Y0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, false, false, false, false, "mula_lu_lu");
> + return;
> + case MULA_HS_HS_RRR_9_OPCODE_Y0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, true, true, true, "mula_hs_hs");
> + return;
> + case MULA_LS_LS_RRR_9_OPCODE_Y0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, false, true, false, true, "mula_ls_ls");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_shift_opcode_y0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t shamt = get_ShAmt_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> +
> + switch (get_ShiftOpcodeExtension_Y0(bundle)) {
> + case ROTLI_SHIFT_OPCODE_Y0:
> + gen_rotli(dc, rdst, rsrc, shamt);
> + return;
> + case SHLI_SHIFT_OPCODE_Y0:
> + gen_shli(dc, rdst, rsrc, shamt);
> + return;
> + case SHRUI_SHIFT_OPCODE_Y0:
> + gen_shrui(dc, rdst, rsrc, shamt);
> + return;
> + case SHRSI_SHIFT_OPCODE_Y0:
> + gen_shrsi(dc, rdst, rsrc, shamt);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_0_opcode_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rsrcb = get_SrcB_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> +
> + switch (get_RRROpcodeExtension_Y1(bundle)) {
> + case ADDX_SPECIAL_0_OPCODE_Y1:
> + gen_addx(dc, rdst, rsrc, rsrcb);
> + return;
> + case ADD_SPECIAL_0_OPCODE_Y1:
> + gen_add(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUBX_RRR_0_OPCODE_Y1:
> + gen_subx(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUB_RRR_0_OPCODE_Y1:
> + gen_sub(dc, rdst, rsrc, rsrcb);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_u_opcode_ex_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> +
> + switch (get_UnaryOpcodeExtension_Y1(bundle)) {
> + case NOP_UNARY_OPCODE_Y1:
> + case FNOP_UNARY_OPCODE_Y1:
> + if (!rsrc && !rdst) {
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
> + return;
> + }
> + break;
> + case JALRP_UNARY_OPCODE_Y1:
> + case JALR_UNARY_OPCODE_Y1:
> + if (!rdst) {
> + gen_jalr(dc, rsrc);
> + return;
> + }
> + break;
> + case JR_UNARY_OPCODE_Y1:
> + case JRP_UNARY_OPCODE_Y1:
> + if (!rdst) {
> + gen_jr(dc, rsrc);
> + return;
> + }
> + break;
> + case LNK_UNARY_OPCODE_Y1:
> + if (!rsrc) {
> + gen_lnk(dc, rdst);
> + return;
> + }
> + break;
> + case ILL_UNARY_OPCODE_Y1:
> + break;
> + default:
> + g_assert_not_reached();
> + }
> +
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP decode_u_opcode_ex_y1, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> +}
> +
> +static void decode_rrr_1_opcode_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rsrcb = get_SrcB_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> +
> + switch (get_RRROpcodeExtension_Y1(bundle)) {
> + case UNARY_RRR_1_OPCODE_Y1:
> + return decode_u_opcode_ex_y1(dc, bundle);
> + case SHL1ADD_RRR_1_OPCODE_Y1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
> + return;
> + case SHL2ADD_RRR_1_OPCODE_Y1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
> + return;
> + case SHL3ADD_RRR_1_OPCODE_Y1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_2_opcode_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rsrcb = get_SrcB_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> +
> + switch (get_RRROpcodeExtension_Y1(bundle)) {
> + case CMPLES_RRR_2_OPCODE_Y1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
> + return;
> + case CMPLEU_RRR_2_OPCODE_Y1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
> + return;
> + case CMPLTS_RRR_2_OPCODE_Y1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
> + return;
> + case CMPLTU_RRR_2_OPCODE_Y1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_3_opcode_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rsrcb = get_SrcB_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> +
> + switch (get_RRROpcodeExtension_Y1(bundle)) {
> + case CMPEQ_RRR_3_OPCODE_Y1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
> + return;
> + case CMPNE_RRR_3_OPCODE_Y1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_5_opcode_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rsrcb = get_SrcB_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> +
> + switch (get_RRROpcodeExtension_Y1(bundle)) {
> + case OR_RRR_5_OPCODE_Y1:
> + gen_or(dc, rdst, rsrc, rsrcb);
> + return;
> + case AND_RRR_5_OPCODE_Y1:
> + gen_and(dc, rdst, rsrc, rsrcb);
> + return;
> + case NOR_RRR_5_OPCODE_Y1:
> + gen_nor(dc, rdst, rsrc, rsrcb);
> + return;
> + case XOR_RRR_5_OPCODE_Y1:
> + gen_xor(dc, rdst, rsrc, rsrcb);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_shift_opcode_y1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> + uint8_t shamt = get_ShAmt_Y1(bundle);
> +
> + switch (get_ShiftOpcodeExtension_Y1(bundle)) {
> + case ROTLI_SHIFT_OPCODE_Y1:
> + gen_rotli(dc, rdst, rsrc, shamt);
> + return;
> + case SHLI_SHIFT_OPCODE_Y1:
> + gen_shli(dc, rdst, rsrc, shamt);
> + return;
> + case SHRSI_SHIFT_OPCODE_Y1:
> + gen_shrsi(dc, rdst, rsrc, shamt);
> + return;
> + case SHRUI_SHIFT_OPCODE_Y1:
> + gen_shrui(dc, rdst, rsrc, shamt);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_ldst0_opcode_y2(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrca = get_SrcA_Y2(bundle);
> + uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
> +
> + switch (get_Mode(bundle)) {
> + case MODE_OPCODE_YA2:
> + gen_ld(dc, rsrcbdst, rsrca, MO_SB, "ld1s");
> + return;
> + case MODE_OPCODE_YC2:
> + gen_st(dc, rsrca, rsrcbdst, MO_UB, "st1");
> + return;
> + case MODE_OPCODE_YB2:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP ldst0_opcode_y2, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_ldst1_opcode_y2(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y2(bundle);
> + uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
> +
> + switch (get_Mode(bundle)) {
> + case MODE_OPCODE_YA2:
> + if (rsrcbdst == TILEGX_R_ZERO) {
> + /* Nothing to do: a load into the zero register is a prefetch */
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "prefetch r%d\n", rsrc);
> + return;
> + }
> + gen_ld(dc, rsrcbdst, rsrc, MO_UB, "ld1u");
> + return;
> + case MODE_OPCODE_YB2:
> + gen_ld(dc, rsrcbdst, rsrc, MO_LESL, "ld4s");
> + return;
> + case MODE_OPCODE_YC2:
> + gen_st(dc, rsrc, rsrcbdst, MO_LEUW, "st2");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_ldst2_opcode_y2(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_Y2(bundle);
> + uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
> +
> + switch (get_Mode(bundle)) {
> + case MODE_OPCODE_YC2:
> + gen_st(dc, rsrc, rsrcbdst, MO_LEUL, "st4");
> + return;
> + case MODE_OPCODE_YB2:
> + gen_ld(dc, rsrcbdst, rsrc, MO_LEUL, "ld4u");
> + return;
> + case MODE_OPCODE_YA2:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP ldst2_opcode_y2, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_ldst3_opcode_y2(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrca = get_SrcA_Y2(bundle);
> + uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
> +
> + switch (get_Mode(bundle)) {
> + case MODE_OPCODE_YA2:
> + gen_ld(dc, rsrcbdst, rsrca, MO_LEUW, "ld2u");
> + return;
> + case MODE_OPCODE_YB2:
> + gen_ld(dc, rsrcbdst, rsrca, MO_LEQ, "ld(na)");
> + return;
> + case MODE_OPCODE_YC2:
> + gen_st(dc, rsrca, rsrcbdst, MO_LEQ, "st");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_bf_opcode_x0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X0(bundle);
> + uint8_t rdst = get_Dest_X0(bundle);
> + uint8_t start = get_BFStart_X0(bundle);
> + uint8_t end = get_BFEnd_X0(bundle);
> +
> + switch (get_BFOpcodeExtension_X0(bundle)) {
> + case BFEXTS_BF_OPCODE_X0:
> + gen_bfexts(dc, rdst, rsrc, start, end);
> + return;
> + case BFEXTU_BF_OPCODE_X0:
> + gen_bfextu(dc, rdst, rsrc, start, end);
> + return;
> + case BFINS_BF_OPCODE_X0:
> + gen_bfins(dc, rdst, rsrc, start, end);
> + return;
> + case MM_BF_OPCODE_X0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP bf_opcode_x0, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_imm8_opcode_x0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X0(bundle);
> + uint8_t rdst = get_Dest_X0(bundle);
> + int8_t imm8 = get_Imm8_X0(bundle);
> +
> + switch (get_Imm8OpcodeExtension_X0(bundle)) {
> + case ADDI_IMM8_OPCODE_X0:
> + gen_addimm(dc, rdst, rsrc, imm8);
> + return;
> + case ADDXI_IMM8_OPCODE_X0:
> + gen_addximm(dc, rdst, rsrc, imm8);
> + return;
> + case ANDI_IMM8_OPCODE_X0:
> + gen_andi(dc, rdst, rsrc, imm8);
> + return;
> + case CMPEQI_IMM8_OPCODE_X0:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
> + return;
> + case CMPLTSI_IMM8_OPCODE_X0:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
> + return;
> + case CMPLTUI_IMM8_OPCODE_X0:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "cmpltui");
> + return;
> + case ORI_IMM8_OPCODE_X0:
> + gen_ori(dc, rdst, rsrc, imm8);
> + return;
> + case V1CMPEQI_IMM8_OPCODE_X0:
> + gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "v1cmpeqi");
> + return;
> + case V1CMPLTSI_IMM8_OPCODE_X0:
> + gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "v1cmpltsi");
> + return;
> + case V1CMPLTUI_IMM8_OPCODE_X0:
> + gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "v1cmpltui");
> + return;
> + case XORI_IMM8_OPCODE_X0:
> + gen_xori(dc, rdst, rsrc, imm8);
> + return;
> + case V1ADDI_IMM8_OPCODE_X0:
> + case V1MAXUI_IMM8_OPCODE_X0:
> + case V1MINUI_IMM8_OPCODE_X0:
> + case V2ADDI_IMM8_OPCODE_X0:
> + case V2CMPEQI_IMM8_OPCODE_X0:
> + case V2CMPLTSI_IMM8_OPCODE_X0:
> + case V2CMPLTUI_IMM8_OPCODE_X0:
> + case V2MAXSI_IMM8_OPCODE_X0:
> + case V2MINSI_IMM8_OPCODE_X0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP imm8_opcode_x0, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_u_opcode_ex_x0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X0(bundle);
> + uint8_t rdst = get_Dest_X0(bundle);
> +
> + switch (get_UnaryOpcodeExtension_X0(bundle)) {
> + case CNTLZ_UNARY_OPCODE_X0:
> + gen_cntlz(dc, rdst, rsrc);
> + return;
> + case CNTTZ_UNARY_OPCODE_X0:
> + gen_cnttz(dc, rdst, rsrc);
> + return;
> + case FNOP_UNARY_OPCODE_X0:
> + case NOP_UNARY_OPCODE_X0:
> + if (!rsrc && !rdst) {
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
> + return;
> + }
> + /* Fall through */
> + case FSINGLE_PACK1_UNARY_OPCODE_X0:
> + case PCNT_UNARY_OPCODE_X0:
> + case REVBITS_UNARY_OPCODE_X0:
> + case REVBYTES_UNARY_OPCODE_X0:
> + case TBLIDXB0_UNARY_OPCODE_X0:
> + case TBLIDXB1_UNARY_OPCODE_X0:
> + case TBLIDXB2_UNARY_OPCODE_X0:
> + case TBLIDXB3_UNARY_OPCODE_X0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP decode_u_opcode_ex_x0, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_rrr_0_opcode_x0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X0(bundle);
> + uint8_t rsrcb = get_SrcB_X0(bundle);
> + uint8_t rdst = get_Dest_X0(bundle);
> +
> + switch (get_RRROpcodeExtension_X0(bundle)) {
> + case ADD_RRR_0_OPCODE_X0:
> + gen_add(dc, rdst, rsrc, rsrcb);
> + return;
> + case ADDX_RRR_0_OPCODE_X0:
> + gen_addx(dc, rdst, rsrc, rsrcb);
> + return;
> + case ADDXSC_RRR_0_OPCODE_X0:
> + gen_addxsc(dc, rdst, rsrc, rsrcb);
> + return;
> + case AND_RRR_0_OPCODE_X0:
> + gen_and(dc, rdst, rsrc, rsrcb);
> + return;
> + case CMOVEQZ_RRR_0_OPCODE_X0:
> + gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmoveqz");
> + return;
> + case CMOVNEZ_RRR_0_OPCODE_X0:
> + gen_cmov(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmovnez");
> + return;
> + case CMPEQ_RRR_0_OPCODE_X0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
> + return;
> + case CMPLES_RRR_0_OPCODE_X0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
> + return;
> + case CMPLEU_RRR_0_OPCODE_X0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
> + return;
> + case CMPLTS_RRR_0_OPCODE_X0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
> + return;
> + case CMPLTU_RRR_0_OPCODE_X0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
> + return;
> + case CMPNE_RRR_0_OPCODE_X0:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
> + return;
> + case DBLALIGN_RRR_0_OPCODE_X0:
> + gen_dblalign(dc, rdst, rsrc, rsrcb);
> + return;
> + case MNZ_RRR_0_OPCODE_X0:
> + gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "mnz");
> + return;
> + case MZ_RRR_0_OPCODE_X0:
> + gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "mz");
> + return;
> + case MULAX_RRR_0_OPCODE_X0:
> + gen_mulx(dc, rdst, rsrc, rsrcb, true, "mulax");
> + return;
> + case MULX_RRR_0_OPCODE_X0:
> + gen_mulx(dc, rdst, rsrc, rsrcb, false, "mulx");
> + return;
> + case MULA_HS_HS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, true, true, true, "mula_hs_hs");
> + return;
> + case MULA_HS_HU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, true, true, false, "mula_hs_hu");
> + return;
> + case MULA_HS_LS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, true, false, true, "mula_hs_ls");
> + return;
> + case MULA_HS_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, true, false, false, "mula_hs_lu");
> + return;
> + case MULA_HU_LS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, false, false, true, "mula_hu_ls");
> + return;
> + case MULA_HU_HU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, false, true, false, "mula_hu_hu");
> + return;
> + case MULA_HU_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, true, false, false, false, "mula_hu_lu");
> + return;
> + case MULA_LS_LS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, false, true, false, true, "mula_ls_ls");
> + return;
> + case MULA_LS_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, false, true, false, false, "mula_ls_lu");
> + return;
> + case MULA_LU_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + true, false, false, false, false, "mula_lu_lu");
> + return;
> + case MUL_HS_HS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, true, true, true, "mul_hs_hs");
> + return;
> + case MUL_HS_HU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, true, true, false, "mul_hs_hu");
> + return;
> + case MUL_HS_LS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, true, false, true, "mul_hs_ls");
> + return;
> + case MUL_HS_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, true, false, false, "mul_hs_lu");
> + return;
> + case MUL_HU_LS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, false, false, true, "mul_hu_ls");
> + return;
> + case MUL_HU_HU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, false, true, false, "mul_hu_hu");
> + return;
> + case MUL_HU_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, true, false, false, false, "mul_hu_lu");
> + return;
> + case MUL_LS_LS_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, false, true, false, true, "mul_ls_ls");
> + return;
> + case MUL_LS_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, false, true, false, false, "mul_ls_lu");
> + return;
> + case MUL_LU_LU_RRR_0_OPCODE_X0:
> + gen_mul(dc, rdst, rsrc, rsrcb,
> + false, false, false, false, false, "mul_lu_lu");
> + return;
> + case NOR_RRR_0_OPCODE_X0:
> + gen_nor(dc, rdst, rsrc, rsrcb);
> + return;
> + case OR_RRR_0_OPCODE_X0:
> + gen_or(dc, rdst, rsrc, rsrcb);
> + return;
> + case ROTL_RRR_0_OPCODE_X0:
> + gen_rotl(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHL_RRR_0_OPCODE_X0:
> + gen_shl(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHL1ADDX_RRR_0_OPCODE_X0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 1, true);
> + return;
> + case SHL1ADD_RRR_0_OPCODE_X0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
> + return;
> + case SHL2ADDX_RRR_0_OPCODE_X0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 2, true);
> + return;
> + case SHL2ADD_RRR_0_OPCODE_X0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
> + return;
> + case SHL3ADDX_RRR_0_OPCODE_X0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 3, true);
> + return;
> + case SHL3ADD_RRR_0_OPCODE_X0:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
> + return;
> + case SHLX_RRR_0_OPCODE_X0:
> + gen_shlx(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRS_RRR_0_OPCODE_X0:
> + gen_shrs(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRUX_RRR_0_OPCODE_X0:
> + gen_shrux(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRU_RRR_0_OPCODE_X0:
> + gen_shru(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHUFFLEBYTES_RRR_0_OPCODE_X0:
> + gen_shufflebytes(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUBX_RRR_0_OPCODE_X0:
> + gen_subx(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUB_RRR_0_OPCODE_X0:
> + gen_sub(dc, rdst, rsrc, rsrcb);
> + return;
> + case UNARY_RRR_0_OPCODE_X0:
> + return decode_u_opcode_ex_x0(dc, bundle);
> + case V1INT_L_RRR_0_OPCODE_X0:
> + gen_v1int_l(dc, rdst, rsrc, rsrcb);
> + return;
> + case V4INT_L_RRR_0_OPCODE_X0:
> + gen_v4int_l(dc, rdst, rsrc, rsrcb);
> + return;
> + case V1CMPEQ_RRR_0_OPCODE_X0:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "v1cmpeq");
> + return;
> + case V1CMPLES_RRR_0_OPCODE_X0:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "v1cmples");
> + return;
> + case V1CMPLEU_RRR_0_OPCODE_X0:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "v1cmpleu");
> + return;
> + case V1CMPLTS_RRR_0_OPCODE_X0:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "v1cmplts");
> + return;
> + case V1CMPLTU_RRR_0_OPCODE_X0:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "v1cmpltu");
> + return;
> + case V1CMPNE_RRR_0_OPCODE_X0:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "v1cmpne");
> + return;
> + case XOR_RRR_0_OPCODE_X0:
> + gen_xor(dc, rdst, rsrc, rsrcb);
> + return;
> + case CMULAF_RRR_0_OPCODE_X0:
> + case CMULA_RRR_0_OPCODE_X0:
> + case CMULFR_RRR_0_OPCODE_X0:
> + case CMULF_RRR_0_OPCODE_X0:
> + case CMULHR_RRR_0_OPCODE_X0:
> + case CMULH_RRR_0_OPCODE_X0:
> + case CMUL_RRR_0_OPCODE_X0:
> + case CRC32_32_RRR_0_OPCODE_X0:
> + case CRC32_8_RRR_0_OPCODE_X0:
> + case DBLALIGN2_RRR_0_OPCODE_X0:
> + case DBLALIGN4_RRR_0_OPCODE_X0:
> + case DBLALIGN6_RRR_0_OPCODE_X0:
> + case FDOUBLE_ADDSUB_RRR_0_OPCODE_X0:
> + case FDOUBLE_ADD_FLAGS_RRR_0_OPCODE_X0:
> + case FDOUBLE_MUL_FLAGS_RRR_0_OPCODE_X0:
> + case FDOUBLE_PACK1_RRR_0_OPCODE_X0:
> + case FDOUBLE_PACK2_RRR_0_OPCODE_X0:
> + case FDOUBLE_SUB_FLAGS_RRR_0_OPCODE_X0:
> + case FDOUBLE_UNPACK_MAX_RRR_0_OPCODE_X0:
> + case FDOUBLE_UNPACK_MIN_RRR_0_OPCODE_X0:
> + case FSINGLE_ADD1_RRR_0_OPCODE_X0:
> + case FSINGLE_ADDSUB2_RRR_0_OPCODE_X0:
> + case FSINGLE_MUL1_RRR_0_OPCODE_X0:
> + case FSINGLE_MUL2_RRR_0_OPCODE_X0:
> + case FSINGLE_PACK2_RRR_0_OPCODE_X0:
> + case FSINGLE_SUB1_RRR_0_OPCODE_X0:
> + case SUBXSC_RRR_0_OPCODE_X0:
> + case V1ADDUC_RRR_0_OPCODE_X0:
> + case V1ADD_RRR_0_OPCODE_X0:
> + case V1ADIFFU_RRR_0_OPCODE_X0:
> + case V1AVGU_RRR_0_OPCODE_X0:
> + case V1DDOTPUSA_RRR_0_OPCODE_X0:
> + case V1DDOTPUS_RRR_0_OPCODE_X0:
> + case V1DOTPA_RRR_0_OPCODE_X0:
> + case V1DOTPUSA_RRR_0_OPCODE_X0:
> + case V1DOTPUS_RRR_0_OPCODE_X0:
> + case V1DOTP_RRR_0_OPCODE_X0:
> + case V1MAXU_RRR_0_OPCODE_X0:
> + case V1MINU_RRR_0_OPCODE_X0:
> + case V1MNZ_RRR_0_OPCODE_X0:
> + case V1MULTU_RRR_0_OPCODE_X0:
> + case V1MULUS_RRR_0_OPCODE_X0:
> + case V1MULU_RRR_0_OPCODE_X0:
> + case V1MZ_RRR_0_OPCODE_X0:
> + case V1SADAU_RRR_0_OPCODE_X0:
> + case V1SADU_RRR_0_OPCODE_X0:
> + case V1SHL_RRR_0_OPCODE_X0:
> + case V1SHRS_RRR_0_OPCODE_X0:
> + case V1SHRU_RRR_0_OPCODE_X0:
> + case V1SUBUC_RRR_0_OPCODE_X0:
> + case V1SUB_RRR_0_OPCODE_X0:
> + case V1INT_H_RRR_0_OPCODE_X0:
> + case V2INT_H_RRR_0_OPCODE_X0:
> + case V2INT_L_RRR_0_OPCODE_X0:
> + case V4INT_H_RRR_0_OPCODE_X0:
> + case V2ADDSC_RRR_0_OPCODE_X0:
> + case V2ADD_RRR_0_OPCODE_X0:
> + case V2ADIFFS_RRR_0_OPCODE_X0:
> + case V2AVGS_RRR_0_OPCODE_X0:
> + case V2CMPEQ_RRR_0_OPCODE_X0:
> + case V2CMPLES_RRR_0_OPCODE_X0:
> + case V2CMPLEU_RRR_0_OPCODE_X0:
> + case V2CMPLTS_RRR_0_OPCODE_X0:
> + case V2CMPLTU_RRR_0_OPCODE_X0:
> + case V2CMPNE_RRR_0_OPCODE_X0:
> + case V2DOTPA_RRR_0_OPCODE_X0:
> + case V2DOTP_RRR_0_OPCODE_X0:
> + case V2MAXS_RRR_0_OPCODE_X0:
> + case V2MINS_RRR_0_OPCODE_X0:
> + case V2MNZ_RRR_0_OPCODE_X0:
> + case V2MULFSC_RRR_0_OPCODE_X0:
> + case V2MULS_RRR_0_OPCODE_X0:
> + case V2MULTS_RRR_0_OPCODE_X0:
> + case V2MZ_RRR_0_OPCODE_X0:
> + case V2PACKH_RRR_0_OPCODE_X0:
> + case V2PACKL_RRR_0_OPCODE_X0:
> + case V2PACKUC_RRR_0_OPCODE_X0:
> + case V2SADAS_RRR_0_OPCODE_X0:
> + case V2SADAU_RRR_0_OPCODE_X0:
> + case V2SADS_RRR_0_OPCODE_X0:
> + case V2SADU_RRR_0_OPCODE_X0:
> + case V2SHLSC_RRR_0_OPCODE_X0:
> + case V2SHL_RRR_0_OPCODE_X0:
> + case V2SHRS_RRR_0_OPCODE_X0:
> + case V2SHRU_RRR_0_OPCODE_X0:
> + case V2SUBSC_RRR_0_OPCODE_X0:
> + case V2SUB_RRR_0_OPCODE_X0:
> + case V4ADDSC_RRR_0_OPCODE_X0:
> + case V4ADD_RRR_0_OPCODE_X0:
> + case V4PACKSC_RRR_0_OPCODE_X0:
> + case V4SHLSC_RRR_0_OPCODE_X0:
> + case V4SHL_RRR_0_OPCODE_X0:
> + case V4SHRS_RRR_0_OPCODE_X0:
> + case V4SHRU_RRR_0_OPCODE_X0:
> + case V4SUBSC_RRR_0_OPCODE_X0:
> + case V4SUB_RRR_0_OPCODE_X0:
> + case V1DDOTPUA_RRR_0_OPCODE_X0:
> + case V1DDOTPU_RRR_0_OPCODE_X0:
> + case V1DOTPUA_RRR_0_OPCODE_X0:
> + case V1DOTPU_RRR_0_OPCODE_X0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP rrr_0_opcode_x0, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_shift_opcode_x0(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X0(bundle);
> + uint8_t rdst = get_Dest_X0(bundle);
> + uint8_t shamt = get_ShAmt_X0(bundle);
> +
> + switch (get_ShiftOpcodeExtension_X0(bundle)) {
> + case ROTLI_SHIFT_OPCODE_X0:
> + gen_rotli(dc, rdst, rsrc, shamt);
> + return;
> + case SHLI_SHIFT_OPCODE_X0:
> + gen_shli(dc, rdst, rsrc, shamt);
> + return;
> + case SHLXI_SHIFT_OPCODE_X0:
> + gen_shlxi(dc, rdst, rsrc, shamt);
> + return;
> + case SHRSI_SHIFT_OPCODE_X0:
> + gen_shrsi(dc, rdst, rsrc, shamt);
> + return;
> + case SHRUI_SHIFT_OPCODE_X0:
> + gen_shrui(dc, rdst, rsrc, shamt);
> + return;
> + case SHRUXI_SHIFT_OPCODE_X0:
> + gen_shruxi(dc, rdst, rsrc, shamt);
> + return;
> + case V1SHRUI_SHIFT_OPCODE_X0:
> + gen_v1shrui(dc, rdst, rsrc, shamt);
> + return;
> + case V1SHLI_SHIFT_OPCODE_X0:
> + case V1SHRSI_SHIFT_OPCODE_X0:
> + case V2SHLI_SHIFT_OPCODE_X0:
> + case V2SHRSI_SHIFT_OPCODE_X0:
> + case V2SHRUI_SHIFT_OPCODE_X0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP shift_opcode_x0, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_branch_opcode_x1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t src = get_SrcA_X1(bundle);
> + int32_t off = sign_extend(get_BrOff_X1(bundle), 17);
> +
> + switch (get_BrType_X1(bundle)) {
> + case BEQZT_BRANCH_OPCODE_X1:
> + case BEQZ_BRANCH_OPCODE_X1:
> + gen_b(dc, src, off, TCG_COND_EQ, "beqz(t)");
> + return;
> + case BNEZT_BRANCH_OPCODE_X1:
> + case BNEZ_BRANCH_OPCODE_X1:
> + gen_b(dc, src, off, TCG_COND_NE, "bnez(t)");
> + return;
> + case BLBCT_BRANCH_OPCODE_X1:
> + case BLBC_BRANCH_OPCODE_X1:
> + gen_blb(dc, src, off, TCG_COND_EQ, "blbc(t)");
> + return;
> + case BLBST_BRANCH_OPCODE_X1:
> + case BLBS_BRANCH_OPCODE_X1:
> + gen_blb(dc, src, off, TCG_COND_NE, "blbs(t)");
> + return;
> + case BLEZT_BRANCH_OPCODE_X1:
> + case BLEZ_BRANCH_OPCODE_X1:
> + gen_b(dc, src, off, TCG_COND_LE, "blez(t)");
> + return;
> + case BLTZT_BRANCH_OPCODE_X1:
> + case BLTZ_BRANCH_OPCODE_X1:
> + gen_b(dc, src, off, TCG_COND_LT, "bltz(t)");
> + return;
> + case BGTZT_BRANCH_OPCODE_X1:
> + case BGTZ_BRANCH_OPCODE_X1:
> + gen_b(dc, src, off, TCG_COND_GT, "bgtz(t)");
> + return;
> + case BGEZT_BRANCH_OPCODE_X1:
> + case BGEZ_BRANCH_OPCODE_X1:
> + gen_b(dc, src, off, TCG_COND_GE, "bgez(t)");
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_imm8_opcode_x1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X1(bundle);
> + uint8_t rdst = get_Dest_X1(bundle);
> + int8_t imm8 = get_Imm8_X1(bundle);
> + uint8_t rsrcb = get_SrcB_X1(bundle);
> + int8_t dimm8 = get_Dest_Imm8_X1(bundle);
> +
> + switch (get_Imm8OpcodeExtension_X1(bundle)) {
> + case ADDI_IMM8_OPCODE_X1:
> + gen_addimm(dc, rdst, rsrc, imm8);
> + return;
> + case ADDXI_IMM8_OPCODE_X1:
> + gen_addximm(dc, rdst, rsrc, imm8);
> + return;
> + case ANDI_IMM8_OPCODE_X1:
> + gen_andi(dc, rdst, rsrc, imm8);
> + return;
> + case CMPEQI_IMM8_OPCODE_X1:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
> + return;
> + case CMPLTSI_IMM8_OPCODE_X1:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
> + return;
> + case CMPLTUI_IMM8_OPCODE_X1:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "cmpltui");
> + return;
> + case LD1S_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_SB, "ld1s_add");
> + return;
> + case LD1U_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_UB, "ld1u_add");
> + return;
> + case LD2S_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_LESW, "ld2s_add");
> + return;
> + case LD2U_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_LEUW, "ld2u_add");
> + return;
> + case LD4S_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_LESL, "ld4s_add");
> + return;
> + case LD4U_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_LEUL, "ld4u_add");
> + return;
> + case LD_ADD_IMM8_OPCODE_X1:
> + gen_ld_add(dc, rdst, rsrc, imm8, MO_LEQ, "ld(na)_add");
> + return;
> + case MFSPR_IMM8_OPCODE_X1:
> + gen_mfspr(dc, rdst, get_MF_Imm14_X1(bundle));
> + return;
> + case MTSPR_IMM8_OPCODE_X1:
> + gen_mtspr(dc, rsrc, get_MT_Imm14_X1(bundle));
> + return;
> + case ORI_IMM8_OPCODE_X1:
> + gen_ori(dc, rdst, rsrc, imm8);
> + return;
> + case ST_ADD_IMM8_OPCODE_X1:
> + gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEQ, "st_add");
> + return;
> + case ST1_ADD_IMM8_OPCODE_X1:
> + gen_st_add(dc, rsrc, rsrcb, dimm8, MO_UB, "st1_add");
> + return;
> + case ST2_ADD_IMM8_OPCODE_X1:
> + gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEUW, "st2_add");
> + return;
> + case ST4_ADD_IMM8_OPCODE_X1:
> + gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEUL, "st4_add");
> + return;
> + case V1CMPEQI_IMM8_OPCODE_X1:
> + gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "v1cmpeqi");
> + return;
> + case V1CMPLTSI_IMM8_OPCODE_X1:
> + gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "v1cmpltsi");
> + return;
> + case V1CMPLTUI_IMM8_OPCODE_X1:
> + gen_v1cmpi(dc, rdst, rsrc, imm8, TCG_COND_LTU, "v1cmpltui");
> + return;
> + case XORI_IMM8_OPCODE_X1:
> + gen_xori(dc, rdst, rsrc, imm8);
> + return;
> + case LDNT1S_ADD_IMM8_OPCODE_X1:
> + case LDNT1U_ADD_IMM8_OPCODE_X1:
> + case LDNT2S_ADD_IMM8_OPCODE_X1:
> + case LDNT2U_ADD_IMM8_OPCODE_X1:
> + case LDNT4S_ADD_IMM8_OPCODE_X1:
> + case LDNT4U_ADD_IMM8_OPCODE_X1:
> + case LDNT_ADD_IMM8_OPCODE_X1:
> + case LWNA_ADD_IMM8_OPCODE_X1:
> + case STNT1_ADD_IMM8_OPCODE_X1:
> + case STNT2_ADD_IMM8_OPCODE_X1:
> + case STNT4_ADD_IMM8_OPCODE_X1:
> + case STNT_ADD_IMM8_OPCODE_X1:
> + case V1ADDI_IMM8_OPCODE_X1:
> + case V1MAXUI_IMM8_OPCODE_X1:
> + case V1MINUI_IMM8_OPCODE_X1:
> + case V2ADDI_IMM8_OPCODE_X1:
> + case V2CMPEQI_IMM8_OPCODE_X1:
> + case V2CMPLTSI_IMM8_OPCODE_X1:
> + case V2CMPLTUI_IMM8_OPCODE_X1:
> + case V2MAXSI_IMM8_OPCODE_X1:
> + case V2MINSI_IMM8_OPCODE_X1:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP imm8_opcode_x1, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_jump_opcode_x1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + int off = sign_extend(get_JumpOff_X1(bundle), 27);
> +
> + switch (get_JumpOpcodeExtension_X1(bundle)) {
> + case JAL_JUMP_OPCODE_X1:
> + gen_jal(dc, off);
> + return;
> + case J_JUMP_OPCODE_X1:
> + gen_j(dc, off);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_u_opcode_ex_x1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X1(bundle);
> + uint8_t rdst = get_Dest_X1(bundle);
> +
> + switch (get_UnaryOpcodeExtension_X1(bundle)) {
> + case NOP_UNARY_OPCODE_X1:
> + case FNOP_UNARY_OPCODE_X1:
> + if (!rdst && !rsrc) {
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
> + return;
> + }
> + break;
> + case JALRP_UNARY_OPCODE_X1:
> + case JALR_UNARY_OPCODE_X1:
> + if (!rdst) {
> + gen_jalr(dc, rsrc);
> + return;
> + }
> + break;
> + case JRP_UNARY_OPCODE_X1:
> + case JR_UNARY_OPCODE_X1:
> + if (!rdst) {
> + gen_jr(dc, rsrc);
> + return;
> + }
> + break;
> + case LD1S_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_SB, "ld1s");
> + return;
> + case LD1U_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_UB, "ld1u");
> + return;
> + case LD2S_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_LESW, "ld2s");
> + return;
> + case LD2U_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_LEUW, "ld2u");
> + return;
> + case LD4S_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_LESL, "ld4s");
> + return;
> + case LD4U_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_LEUL, "ld4u");
> + return;
> + case LDNA_UNARY_OPCODE_X1:
> + case LD_UNARY_OPCODE_X1:
> + gen_ld(dc, rdst, rsrc, MO_LEQ, "ld(na)");
> + return;
> + case LNK_UNARY_OPCODE_X1:
> + if (!rsrc) {
> + gen_lnk(dc, rdst);
> + return;
> + }
> + break;
> + case MF_UNARY_OPCODE_X1:
> + if (!rdst && !rsrc) {
> + gen_mf(dc);
> + return;
> + }
> + break;
> + case SWINT1_UNARY_OPCODE_X1:
> + if (!rsrc && !rdst) {
> + gen_swint1(dc);
> + return;
> + }
> + break;
> + case WH64_UNARY_OPCODE_X1:
> + if (!rdst) {
> + gen_wh64(dc, rsrc);
> + return;
> + }
> + break;
> + case DRAIN_UNARY_OPCODE_X1:
> + case DTLBPR_UNARY_OPCODE_X1:
> + case FINV_UNARY_OPCODE_X1:
> + case FLUSHWB_UNARY_OPCODE_X1:
> + case FLUSH_UNARY_OPCODE_X1:
> + case ICOH_UNARY_OPCODE_X1:
> + case ILL_UNARY_OPCODE_X1:
> + case INV_UNARY_OPCODE_X1:
> + case IRET_UNARY_OPCODE_X1:
> + case LDNT1S_UNARY_OPCODE_X1:
> + case LDNT1U_UNARY_OPCODE_X1:
> + case LDNT2S_UNARY_OPCODE_X1:
> + case LDNT2U_UNARY_OPCODE_X1:
> + case LDNT4S_UNARY_OPCODE_X1:
> + case LDNT4U_UNARY_OPCODE_X1:
> + case LDNT_UNARY_OPCODE_X1:
> + case NAP_UNARY_OPCODE_X1:
> + case SWINT0_UNARY_OPCODE_X1:
> + case SWINT2_UNARY_OPCODE_X1:
> + case SWINT3_UNARY_OPCODE_X1:
> + break;
> + default:
> + g_assert_not_reached();
> + }
> +
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP decode_u_opcode_ex_x1, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> +}
> +
> +static void decode_rrr_0_opcode_x1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X1(bundle);
> + uint8_t rsrcb = get_SrcB_X1(bundle);
> + uint8_t rdst = get_Dest_X1(bundle);
> +
> + switch (get_RRROpcodeExtension_X1(bundle)) {
> + case ADDX_RRR_0_OPCODE_X1:
> + gen_addx(dc, rdst, rsrc, rsrcb);
> + return;
> + case ADDXSC_RRR_0_OPCODE_X1:
> + gen_addxsc(dc, rdst, rsrc, rsrcb);
> + return;
> + case ADD_RRR_0_OPCODE_X1:
> + gen_add(dc, rdst, rsrc, rsrcb);
> + return;
> + case AND_RRR_0_OPCODE_X1:
> + gen_and(dc, rdst, rsrc, rsrcb);
> + return;
> + case CMPEQ_RRR_0_OPCODE_X1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "cmpeq");
> + return;
> + case CMPLES_RRR_0_OPCODE_X1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "cmples");
> + return;
> + case CMPLEU_RRR_0_OPCODE_X1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "cmpleu");
> + return;
> + case CMPEXCH4_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_CMPEXCH4, "cmpexch4");
> + return;
> + case CMPEXCH_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_CMPEXCH, "cmpexch");
> + return;
> + case CMPLTS_RRR_0_OPCODE_X1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "cmplts");
> + return;
> + case CMPLTU_RRR_0_OPCODE_X1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "cmpltu");
> + return;
> + case CMPNE_RRR_0_OPCODE_X1:
> + gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "cmpne");
> + return;
> + case EXCH4_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_EXCH4, "exch4");
> + return;
> + case EXCH_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_EXCH, "exch");
> + return;
> + case FETCHADD_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHADD, "fetchadd");
> + return;
> + case FETCHADD4_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHADD4, "fetchadd4");
> + return;
> + case FETCHADDGEZ_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHADDGEZ, "fetchaddgez");
> + return;
> + case FETCHADDGEZ4_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHADDGEZ4, "fetchaddgez4");
> + return;
> + case FETCHAND_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHAND, "fetchand");
> + return;
> + case FETCHAND4_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHAND4, "fetchand4");
> + return;
> + case FETCHOR_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHOR, "fetchor");
> + return;
> + case FETCHOR4_RRR_0_OPCODE_X1:
> + gen_atomic_excp(dc, rdst, rsrc, rsrcb,
> + TILEGX_EXCP_OPCODE_FETCHOR4, "fetchor4");
> + return;
> + case MZ_RRR_0_OPCODE_X1:
> + gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "mz");
> + return;
> + case MNZ_RRR_0_OPCODE_X1:
> + gen_menz(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "mnz");
> + return;
> + case NOR_RRR_0_OPCODE_X1:
> + gen_nor(dc, rdst, rsrc, rsrcb);
> + return;
> + case OR_RRR_0_OPCODE_X1:
> + gen_or(dc, rdst, rsrc, rsrcb);
> + return;
> + case ROTL_RRR_0_OPCODE_X1:
> + gen_rotl(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHL1ADDX_RRR_0_OPCODE_X1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 1, true);
> + return;
> + case SHL1ADD_RRR_0_OPCODE_X1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 1, false);
> + return;
> + case SHL2ADDX_RRR_0_OPCODE_X1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 2, true);
> + return;
> + case SHL2ADD_RRR_0_OPCODE_X1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 2, false);
> + return;
> + case SHL3ADDX_RRR_0_OPCODE_X1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 3, true);
> + return;
> + case SHL3ADD_RRR_0_OPCODE_X1:
> + gen_shladd(dc, rdst, rsrc, rsrcb, 3, false);
> + return;
> + case SHLX_RRR_0_OPCODE_X1:
> + gen_shlx(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHL_RRR_0_OPCODE_X1:
> + gen_shl(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRS_RRR_0_OPCODE_X1:
> + gen_shrs(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRUX_RRR_0_OPCODE_X1:
> + gen_shrux(dc, rdst, rsrc, rsrcb);
> + return;
> + case SHRU_RRR_0_OPCODE_X1:
> + gen_shru(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUB_RRR_0_OPCODE_X1:
> + gen_sub(dc, rdst, rsrc, rsrcb);
> + return;
> + case SUBX_RRR_0_OPCODE_X1:
> + gen_subx(dc, rdst, rsrc, rsrcb);
> + return;
> + case ST1_RRR_0_OPCODE_X1:
> + if (!rdst) {
> + gen_st(dc, rsrc, rsrcb, MO_UB, "st1");
> + return;
> + }
> + break;
> + case ST2_RRR_0_OPCODE_X1:
> + if (!rdst) {
> + gen_st(dc, rsrc, rsrcb, MO_LEUW, "st2");
> + return;
> + }
> + break;
> + case ST4_RRR_0_OPCODE_X1:
> + if (!rdst) {
> + gen_st(dc, rsrc, rsrcb, MO_LEUL, "st4");
> + return;
> + }
> + break;
> + case ST_RRR_0_OPCODE_X1:
> + if (!rdst) {
> + gen_st(dc, rsrc, rsrcb, MO_LEQ, "st");
> + return;
> + }
> + break;
> + case UNARY_RRR_0_OPCODE_X1:
> + return decode_u_opcode_ex_x1(dc, bundle);
> + case V1INT_L_RRR_0_OPCODE_X1:
> + gen_v1int_l(dc, rdst, rsrc, rsrcb);
> + return;
> + case V4INT_L_RRR_0_OPCODE_X1:
> + gen_v4int_l(dc, rdst, rsrc, rsrcb);
> + return;
> + case V1CMPEQ_RRR_0_OPCODE_X1:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ, "v1cmpeq");
> + return;
> + case V1CMPLES_RRR_0_OPCODE_X1:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE, "v1cmples");
> + return;
> + case V1CMPLEU_RRR_0_OPCODE_X1:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU, "v1cmpleu");
> + return;
> + case V1CMPLTS_RRR_0_OPCODE_X1:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT, "v1cmplts");
> + return;
> + case V1CMPLTU_RRR_0_OPCODE_X1:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU, "v1cmpltu");
> + return;
> + case V1CMPNE_RRR_0_OPCODE_X1:
> + gen_v1cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE, "v1cmpne");
> + return;
> + case XOR_RRR_0_OPCODE_X1:
> + gen_xor(dc, rdst, rsrc, rsrcb);
> + return;
> + case DBLALIGN2_RRR_0_OPCODE_X1:
> + case DBLALIGN4_RRR_0_OPCODE_X1:
> + case DBLALIGN6_RRR_0_OPCODE_X1:
> + case STNT1_RRR_0_OPCODE_X1:
> + case STNT2_RRR_0_OPCODE_X1:
> + case STNT4_RRR_0_OPCODE_X1:
> + case STNT_RRR_0_OPCODE_X1:
> + case SUBXSC_RRR_0_OPCODE_X1:
> + case V1INT_H_RRR_0_OPCODE_X1:
> + case V2INT_H_RRR_0_OPCODE_X1:
> + case V2INT_L_RRR_0_OPCODE_X1:
> + case V4INT_H_RRR_0_OPCODE_X1:
> + case V1ADDUC_RRR_0_OPCODE_X1:
> + case V1ADD_RRR_0_OPCODE_X1:
> + case V1MAXU_RRR_0_OPCODE_X1:
> + case V1MINU_RRR_0_OPCODE_X1:
> + case V1MNZ_RRR_0_OPCODE_X1:
> + case V1MZ_RRR_0_OPCODE_X1:
> + case V1SHL_RRR_0_OPCODE_X1:
> + case V1SHRS_RRR_0_OPCODE_X1:
> + case V1SHRU_RRR_0_OPCODE_X1:
> + case V1SUBUC_RRR_0_OPCODE_X1:
> + case V1SUB_RRR_0_OPCODE_X1:
> + case V2ADDSC_RRR_0_OPCODE_X1:
> + case V2ADD_RRR_0_OPCODE_X1:
> + case V2CMPEQ_RRR_0_OPCODE_X1:
> + case V2CMPLES_RRR_0_OPCODE_X1:
> + case V2CMPLEU_RRR_0_OPCODE_X1:
> + case V2CMPLTS_RRR_0_OPCODE_X1:
> + case V2CMPLTU_RRR_0_OPCODE_X1:
> + case V2CMPNE_RRR_0_OPCODE_X1:
> + case V2MAXS_RRR_0_OPCODE_X1:
> + case V2MINS_RRR_0_OPCODE_X1:
> + case V2MNZ_RRR_0_OPCODE_X1:
> + case V2MZ_RRR_0_OPCODE_X1:
> + case V2PACKH_RRR_0_OPCODE_X1:
> + case V2PACKL_RRR_0_OPCODE_X1:
> + case V2PACKUC_RRR_0_OPCODE_X1:
> + case V2SHLSC_RRR_0_OPCODE_X1:
> + case V2SHL_RRR_0_OPCODE_X1:
> + case V2SHRS_RRR_0_OPCODE_X1:
> + case V2SHRU_RRR_0_OPCODE_X1:
> + case V2SUBSC_RRR_0_OPCODE_X1:
> + case V2SUB_RRR_0_OPCODE_X1:
> + case V4ADDSC_RRR_0_OPCODE_X1:
> + case V4ADD_RRR_0_OPCODE_X1:
> + case V4PACKSC_RRR_0_OPCODE_X1:
> + case V4SHLSC_RRR_0_OPCODE_X1:
> + case V4SHL_RRR_0_OPCODE_X1:
> + case V4SHRS_RRR_0_OPCODE_X1:
> + case V4SHRU_RRR_0_OPCODE_X1:
> + case V4SUBSC_RRR_0_OPCODE_X1:
> + case V4SUB_RRR_0_OPCODE_X1:
> + break;
> + default:
> + g_assert_not_reached();
> + }
> +
> + qemu_log_mask(LOG_UNIMP, "UNIMP rrr_0_opcode_x1, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> +}
> +
> +static void decode_shift_opcode_x1(struct DisasContext *dc,
> + tilegx_bundle_bits bundle)
> +{
> + uint8_t rsrc = get_SrcA_X1(bundle);
> + uint8_t rdst = get_Dest_X1(bundle);
> + uint8_t shamt = get_ShAmt_X1(bundle);
> +
> + switch (get_ShiftOpcodeExtension_X1(bundle)) {
> + case ROTLI_SHIFT_OPCODE_X1:
> + gen_rotli(dc, rdst, rsrc, shamt);
> + return;
> + case SHLI_SHIFT_OPCODE_X1:
> + gen_shli(dc, rdst, rsrc, shamt);
> + return;
> + case SHLXI_SHIFT_OPCODE_X1:
> + gen_shlxi(dc, rdst, rsrc, shamt);
> + return;
> + case SHRSI_SHIFT_OPCODE_X1:
> + gen_shrsi(dc, rdst, rsrc, shamt);
> + return;
> + case SHRUI_SHIFT_OPCODE_X1:
> + gen_shrui(dc, rdst, rsrc, shamt);
> + return;
> + case SHRUXI_SHIFT_OPCODE_X1:
> + gen_shruxi(dc, rdst, rsrc, shamt);
> + return;
> + case V1SHRUI_SHIFT_OPCODE_X1:
> + gen_v1shrui(dc, rdst, rsrc, shamt);
> + return;
> + case V1SHLI_SHIFT_OPCODE_X1:
> + case V1SHRSI_SHIFT_OPCODE_X1:
> + case V2SHLI_SHIFT_OPCODE_X1:
> + case V2SHRSI_SHIFT_OPCODE_X1:
> + case V2SHRUI_SHIFT_OPCODE_X1:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP shift_opcode_x1, [" FMT64X "]\n", bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_y0(struct DisasContext *dc, tilegx_bundle_bits bundle)
> +{
> + unsigned int opcode = get_Opcode_Y0(bundle);
> + uint8_t rsrc = get_SrcA_Y0(bundle);
> + uint8_t rdst = get_Dest_Y0(bundle);
> + int8_t imm8 = get_Imm8_Y0(bundle);
> +
> + switch (opcode) {
> + case ADDI_OPCODE_Y0:
> + gen_addimm(dc, rdst, rsrc, imm8);
> + return;
> + case ADDXI_OPCODE_Y0:
> + gen_addximm(dc, rdst, rsrc, imm8);
> + return;
> + case ANDI_OPCODE_Y0:
> + gen_andi(dc, rdst, rsrc, imm8);
> + return;
> + case CMPEQI_OPCODE_Y0:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
> + return;
> + case CMPLTSI_OPCODE_Y0:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
> + return;
> + case RRR_0_OPCODE_Y0:
> + decode_rrr_0_opcode_y0(dc, bundle);
> + return;
> + case RRR_1_OPCODE_Y0:
> + decode_rrr_1_opcode_y0(dc, bundle);
> + return;
> + case RRR_2_OPCODE_Y0:
> + decode_rrr_2_opcode_y0(dc, bundle);
> + return;
> + case RRR_3_OPCODE_Y0:
> + decode_rrr_3_opcode_y0(dc, bundle);
> + return;
> + case RRR_4_OPCODE_Y0:
> + decode_rrr_4_opcode_y0(dc, bundle);
> + return;
> + case RRR_5_OPCODE_Y0:
> + decode_rrr_5_opcode_y0(dc, bundle);
> + return;
> + case RRR_6_OPCODE_Y0:
> + decode_rrr_6_opcode_y0(dc, bundle);
> + return;
> + case RRR_9_OPCODE_Y0:
> + decode_rrr_9_opcode_y0(dc, bundle);
> + return;
> + case SHIFT_OPCODE_Y0:
> + decode_shift_opcode_y0(dc, bundle);
> + return;
> + case RRR_7_OPCODE_Y0:
> + case RRR_8_OPCODE_Y0:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP y0, opcode %d, bundle [" FMT64X "]\n",
> + opcode, bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_y1(struct DisasContext *dc, tilegx_bundle_bits bundle)
> +{
> + unsigned int opcode = get_Opcode_Y1(bundle);
> + uint8_t rsrc = get_SrcA_Y1(bundle);
> + uint8_t rdst = get_Dest_Y1(bundle);
> + int8_t imm8 = get_Imm8_Y1(bundle);
> +
> + switch (opcode) {
> + case ADDI_OPCODE_Y1:
> + gen_addimm(dc, rdst, rsrc, imm8);
> + return;
> + case ADDXI_OPCODE_Y1:
> + gen_addximm(dc, rdst, rsrc, imm8);
> + return;
> + case ANDI_OPCODE_Y1:
> + gen_andi(dc, rdst, rsrc, imm8);
> + return;
> + case CMPEQI_OPCODE_Y1:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_EQ, "cmpeqi");
> + return;
> + case CMPLTSI_OPCODE_Y1:
> + gen_cmpi(dc, rdst, rsrc, imm8, TCG_COND_LT, "cmpltsi");
> + return;
> + case RRR_0_OPCODE_Y1:
> + decode_rrr_0_opcode_y1(dc, bundle);
> + return;
> + case RRR_1_OPCODE_Y1:
> + decode_rrr_1_opcode_y1(dc, bundle);
> + return;
> + case RRR_2_OPCODE_Y1:
> + decode_rrr_2_opcode_y1(dc, bundle);
> + return;
> + case RRR_3_OPCODE_Y1:
> + decode_rrr_3_opcode_y1(dc, bundle);
> + return;
> + case RRR_5_OPCODE_Y1:
> + decode_rrr_5_opcode_y1(dc, bundle);
> + return;
> + case SHIFT_OPCODE_Y1:
> + decode_shift_opcode_y1(dc, bundle);
> + return;
> + case RRR_4_OPCODE_Y1:
> + case RRR_6_OPCODE_Y1:
> + case RRR_7_OPCODE_Y1:
> + qemu_log_mask(LOG_UNIMP,
> + "UNIMP y1, opcode %d, bundle [" FMT64X "]\n",
> + opcode, bundle);
> + set_exception(dc, TILEGX_EXCP_OPCODE_UNIMPLEMENTED);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_y2(struct DisasContext *dc, tilegx_bundle_bits bundle)
> +{
> + unsigned int opcode = get_Opcode_Y2(bundle);
> +
> + switch (opcode) {
> + case 0: /* LD1S_OPCODE_Y2, ST1_OPCODE_Y2 */
> + decode_ldst0_opcode_y2(dc, bundle);
> + return;
> + case 1: /* LD4S_OPCODE_Y2, LD1U_OPCODE_Y2, ST2_OPCODE_Y2 */
> + decode_ldst1_opcode_y2(dc, bundle);
> + return;
> + case 2: /* LD2S_OPCODE_Y2, LD4U_OPCODE_Y2, ST4_OPCODE_Y2 */
> + decode_ldst2_opcode_y2(dc, bundle);
> + return;
> + case 3: /* LD_OPCODE_Y2, ST_OPCODE_Y2, LD2U_OPCODE_Y2 */
> + decode_ldst3_opcode_y2(dc, bundle);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_x0(struct DisasContext *dc, tilegx_bundle_bits bundle)
> +{
> + unsigned int opcode = get_Opcode_X0(bundle);
> + uint8_t rsrc = get_SrcA_X0(bundle);
> + uint8_t rdst = get_Dest_X0(bundle);
> + int16_t imm16 = get_Imm16_X0(bundle);
> +
> + switch (opcode) {
> + case ADDLI_OPCODE_X0:
> + gen_addimm(dc, rdst, rsrc, imm16);
> + return;
> + case ADDXLI_OPCODE_X0:
> + gen_addximm(dc, rdst, rsrc, imm16);
> + return;
> + case BF_OPCODE_X0:
> + decode_bf_opcode_x0(dc, bundle);
> + return;
> + case IMM8_OPCODE_X0:
> + decode_imm8_opcode_x0(dc, bundle);
> + return;
> + case RRR_0_OPCODE_X0:
> + decode_rrr_0_opcode_x0(dc, bundle);
> + return;
> + case SHIFT_OPCODE_X0:
> + decode_shift_opcode_x0(dc, bundle);
> + return;
> + case SHL16INSLI_OPCODE_X0:
> + gen_shl16insli(dc, rdst, rsrc, (uint16_t)imm16);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void decode_x1(struct DisasContext *dc, tilegx_bundle_bits bundle)
> +{
> + unsigned int opcode = get_Opcode_X1(bundle);
> + uint8_t rsrc = (uint8_t)get_SrcA_X1(bundle);
> + uint8_t rdst = (uint8_t)get_Dest_X1(bundle);
> + int16_t imm16 = (int16_t)get_Imm16_X1(bundle);
> +
> + switch (opcode) {
> + case ADDLI_OPCODE_X1:
> + gen_addimm(dc, rdst, rsrc, imm16);
> + return;
> + case ADDXLI_OPCODE_X1:
> + gen_addximm(dc, rdst, rsrc, imm16);
> + return;
> + case BRANCH_OPCODE_X1:
> + decode_branch_opcode_x1(dc, bundle);
> + return;
> + case IMM8_OPCODE_X1:
> + decode_imm8_opcode_x1(dc, bundle);
> + return;
> + case JUMP_OPCODE_X1:
> + decode_jump_opcode_x1(dc, bundle);
> + return;
> + case RRR_0_OPCODE_X1:
> + decode_rrr_0_opcode_x1(dc, bundle);
> + return;
> + case SHIFT_OPCODE_X1:
> + decode_shift_opcode_x1(dc, bundle);
> + return;
> + case SHL16INSLI_OPCODE_X1:
> + gen_shl16insli(dc, rdst, rsrc, (uint16_t)imm16);
> + return;
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +static void translate_one_bundle(struct DisasContext *dc, uint64_t bundle)
> +{
> + int i;
> + TCGv tmp;
> +
> + for (i = 0; i < TILEGX_TMP_REGS; i++) {
> + dc->tmp_regs[i].idx = TILEGX_R_NOREG;
> + TCGV_UNUSED_I64(dc->tmp_regs[i].val);
> + }
> + dc->tmp_regcur = dc->tmp_regs;
> +
> + if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT))) {
> + tcg_gen_debug_insn_start(dc->pc);
> + }
> +
> + if (get_Mode(bundle)) {
> + decode_y0(dc, bundle);
> + decode_y1(dc, bundle);
> + decode_y2(dc, bundle);
> + } else {
> + decode_x0(dc, bundle);
> + decode_x1(dc, bundle);
> + }
> +
> + for (i = 0; i < TILEGX_TMP_REGS; i++) {
> + if (dc->tmp_regs[i].idx == TILEGX_R_NOREG) {
> + continue;
> + }
> + if (dc->tmp_regs[i].idx < TILEGX_R_COUNT) {
> + tcg_gen_mov_i64(cpu_regs[dc->tmp_regs[i].idx], dc->tmp_regs[i].val);
> + }
> + tcg_temp_free_i64(dc->tmp_regs[i].val);
> + }
> +
> + if (dc->jmp.cond != TCG_COND_NEVER) {
> + if (dc->jmp.cond == TCG_COND_ALWAYS) {
> + tcg_gen_mov_i64(cpu_pc, dc->jmp.dest);
> + } else {
> + tmp = tcg_const_i64(dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
> + tcg_gen_movcond_i64(dc->jmp.cond, cpu_pc,
> + dc->jmp.val1, dc->jmp.val2,
> + dc->jmp.dest, tmp);
> + tcg_temp_free_i64(dc->jmp.val1);
> + tcg_temp_free_i64(dc->jmp.val2);
> + tcg_temp_free_i64(tmp);
> + }
> + tcg_temp_free_i64(dc->jmp.dest);
> + tcg_gen_exit_tb(0);
> + }
> +}
> +
> +static inline void gen_intermediate_code_internal(TileGXCPU *cpu,
> + TranslationBlock *tb,
> + bool search_pc)
> +{
> + DisasContext ctx;
> + DisasContext *dc = &ctx;
> +
> + CPUTLGState *env = &cpu->env;
> + uint64_t pc_start = tb->pc;
> + uint64_t next_page_start = (pc_start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
> + int j, lj = -1;
> + int num_insns = 0;
> + int max_insns = tb->cflags & CF_COUNT_MASK;
> +
> + dc->pc = pc_start;
> + dc->exception = TILEGX_EXCP_NONE;
> + dc->jmp.cond = TCG_COND_NEVER;
> + TCGV_UNUSED_I64(dc->jmp.dest);
> + TCGV_UNUSED_I64(dc->jmp.val1);
> + TCGV_UNUSED_I64(dc->jmp.val2);
> +
> + if (!max_insns) {
> + max_insns = CF_COUNT_MASK;
> + }
> + gen_tb_start(tb);
> +
> + do {
> + TCGV_UNUSED_I64(dc->zero);
> + if (search_pc) {
> + j = tcg_op_buf_count();
> + if (lj < j) {
> + lj++;
> + while (lj < j) {
> + tcg_ctx.gen_opc_instr_start[lj++] = 0;
> + }
> + }
> + tcg_ctx.gen_opc_pc[lj] = dc->pc;
> + tcg_ctx.gen_opc_instr_start[lj] = 1;
> + tcg_ctx.gen_opc_icount[lj] = num_insns;
> + }
> + translate_one_bundle(dc, cpu_ldq_data(env, dc->pc));
> + num_insns++;
> + dc->pc += TILEGX_BUNDLE_SIZE_IN_BYTES;
> + if (dc->exception != TILEGX_EXCP_NONE) {
> + gen_exception(dc, dc->exception);
> + break;
> + }
> + } while (dc->jmp.cond == TCG_COND_NEVER && dc->pc < next_page_start
> + && num_insns < max_insns && !tcg_op_buf_full());
> +
> + if (dc->jmp.cond == TCG_COND_NEVER) {
> + tcg_gen_movi_i64(cpu_pc, dc->pc);
> + tcg_gen_exit_tb(0);
> + }
> +
> + gen_tb_end(tb, num_insns);
> + if (search_pc) {
> + j = tcg_op_buf_count();
> + lj++;
> + while (lj <= j) {
> + tcg_ctx.gen_opc_instr_start[lj++] = 0;
> + }
> + } else {
> + tb->size = dc->pc - pc_start;
> + tb->icount = num_insns;
> + }
> +
> + return;
> +}
> +
> +void gen_intermediate_code(CPUTLGState *env, struct TranslationBlock *tb)
> +{
> + gen_intermediate_code_internal(tilegx_env_get_cpu(env), tb, false);
> +}
> +
> +void gen_intermediate_code_pc(CPUTLGState *env, struct TranslationBlock *tb)
> +{
> + gen_intermediate_code_internal(tilegx_env_get_cpu(env), tb, true);
> +}
> +
> +void restore_state_to_opc(CPUTLGState *env, TranslationBlock *tb, int pc_pos)
> +{
> + env->pc = tcg_ctx.gen_opc_pc[pc_pos];
> +}
> +
> +void tilegx_tcg_init(void)
> +{
> + int i;
> +
> + cpu_env = tcg_global_reg_new_ptr(TCG_AREG0, "env");
> + cpu_pc = tcg_global_mem_new_i64(TCG_AREG0, offsetof(CPUTLGState, pc), "pc");
> + for (i = 0; i < TILEGX_R_COUNT; i++) {
> + cpu_regs[i] = tcg_global_mem_new_i64(TCG_AREG0,
> + offsetof(CPUTLGState, regs[i]),
> + reg_names[i]);
> + }
> + for (i = 0; i < TILEGX_SPR_COUNT; i++) {
> + cpu_spregs[i] = tcg_global_mem_new_i64(TCG_AREG0,
> + offsetof(CPUTLGState, spregs[i]),
> + spreg_names[i]);
> + }
> +#if defined(CONFIG_USER_ONLY)
> + cpu_excparam = tcg_global_mem_new_i32(TCG_AREG0,
> + offsetof(CPUTLGState, excparam),
> + "cpu_excparam");
> +#endif
> +}
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
* Re: [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world"
2015-06-13 13:21 ` [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world" Chen Gang
[not found] ` <55A76DB1.4090302@hotmail.com>
@ 2015-07-19 9:42 ` Chen Gang
2015-07-19 10:09 ` Chen Gang
2015-07-19 9:57 ` Chen Gang
2 siblings, 1 reply; 21+ messages in thread
From: Chen Gang @ 2015-07-19 9:42 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
On 6/13/15 21:21, Chen Gang wrote:
> +static void gen_st_add(struct DisasContext *dc,
> + uint8_t rsrc, uint8_t rsrcb, uint8_t imm8,
It needs int8_t instead of uint8_t for imm8; otherwise glibc's memmove()
generates incorrect results under -O1/-O2/-Os optimization (a minimal
illustration follows the quoted hunk below).
And tilegx linux-user still has other issues (at least one more bug);
I shall continue analyzing, and hope to finish them within this month.
Thanks.
> + TCGMemOp ops, const char *code)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
> + code, rsrc, rsrcb, imm8);
> + tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
> + MMU_USER_IDX, ops);
> + tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
> +}
> +
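
As a minimal, self-contained illustration of the truncation (my own
example, not code from the patch):

    /* Sketch: imm8 comes from a signed 8-bit field such as
     * get_Dest_Imm8_X1(); a uint8_t parameter silently turns a
     * negative post-increment into a large positive one. */
    #include <stdint.h>
    #include <stdio.h>

    static void show(int8_t field)
    {
        uint8_t wrong = field;  /* current type: -8 becomes 248 */
        int8_t  right = field;  /* proposed type: -8 stays -8 */
        /* tcg_gen_addi_i64() takes a signed immediate, so whichever
         * value arrives here is the offset actually added. */
        printf("addi immediate: wrong=%lld right=%lld\n",
               (long long)wrong, (long long)right);
    }

    int main(void)
    {
        show(-8);  /* prints: addi immediate: wrong=248 right=-8 */
        return 0;
    }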
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
* Re: [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world"
2015-06-13 13:21 ` [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world" Chen Gang
[not found] ` <55A76DB1.4090302@hotmail.com>
2015-07-19 9:42 ` Chen Gang
@ 2015-07-19 9:57 ` Chen Gang
2 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-07-19 9:57 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
On 6/13/15 21:21, Chen Gang wrote:
> +
> +static void decode_x1(struct DisasContext *dc, tilegx_bundle_bits bundle)
> +{
> + unsigned int opcode = get_Opcode_X1(bundle);
> + uint8_t rsrc = (uint8_t)get_SrcA_X1(bundle);
> + uint8_t rdst = (uint8_t)get_Dest_X1(bundle);
> + int16_t imm16 = (int16_t)get_Imm16_X1(bundle);
> +
These type casts should be removed.
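
For reference, the cleaned-up declarations would read as below (a sketch
only; decode_x0 in the same patch already uses this style, since the
getters' results fit the narrower types via the implicit conversions):

    uint8_t rsrc = get_SrcA_X1(bundle);
    uint8_t rdst = get_Dest_X1(bundle);
    int16_t imm16 = get_Imm16_X1(bundle);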
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
* Re: [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world"
2015-07-19 9:42 ` Chen Gang
@ 2015-07-19 10:09 ` Chen Gang
0 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-07-19 10:09 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
On 7/19/15 17:42, Chen Gang wrote:
>
> On 6/13/15 21:21, Chen Gang wrote:
>> +static void gen_st_add(struct DisasContext *dc,
>> + uint8_t rsrc, uint8_t rsrcb, uint8_t imm8,
>
> It needs int8_t instead of uint8_t for imm8; otherwise glibc's memmove()
> generates incorrect results under -O1/-O2/-Os optimization.
>
This bug caused many and varied issues: after fixing it, vi works OK.
Now I am analyzing another issue, concerning a stat64 failure.
Thanks
> And tilegx linux-user still has other issues (at least one more bug);
> I shall continue analyzing, and hope to finish them within this month.
>
> Thanks.
>
>> + TCGMemOp ops, const char *code)
>> +{
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "%s r%d, r%d, %d\n",
>> + code, rsrc, rsrcb, imm8);
>> + tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
>> + MMU_USER_IDX, ops);
>> + tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
>> +}
>> +
>
> Thanks.
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
* Re: [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user
2015-06-13 13:10 ` [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user Chen Gang
@ 2015-07-19 11:31 ` Chen Gang
2015-07-19 21:52 ` Chen Gang
0 siblings, 1 reply; 21+ messages in thread
From: Chen Gang @ 2015-07-19 11:31 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
On 6/13/15 21:10, Chen Gang wrote:
> +
> +void cpu_loop(CPUTLGState *env)
> +{
> + CPUState *cs = CPU(tilegx_env_get_cpu(env));
> + int trapnr;
> +
> + while (1) {
> + cpu_exec_start(cs);
> + trapnr = cpu_tilegx_exec(env);
> + cpu_exec_end(cs);
> + switch (trapnr) {
> + case TILEGX_EXCP_SYSCALL:
> + env->regs[TILEGX_R_RE] = do_syscall(env, env->regs[TILEGX_R_NR],
> + env->regs[0], env->regs[1],
> + env->regs[2], env->regs[3],
> + env->regs[4], env->regs[5],
> + env->regs[6], env->regs[7]);
> + env->regs[TILEGX_R_ERR] = TILEGX_IS_ERRNO(env->regs[TILEGX_R_RE])
> + ? env->regs[TILEGX_R_RE]
It needs "- env->regs[TILEGX_R_RE]" instead of "env->regs[TILEGX_R_RE]".
For stat64, when return ENOENT, qemu will mark it as -ENOENT, so syscall
should revert it again.
> + : 0;
> + break;
>
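
That is, something like the following sketch, reusing the patch's own
register names:

    /* do_syscall() returns -errno on failure; the guest ABI expects a
     * positive errno in TILEGX_R_ERR, so negate the value again. */
    env->regs[TILEGX_R_ERR] = TILEGX_IS_ERRNO(env->regs[TILEGX_R_RE])
                              ? -env->regs[TILEGX_R_RE]
                              : 0;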
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
* Re: [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user
2015-07-19 11:31 ` Chen Gang
@ 2015-07-19 21:52 ` Chen Gang
0 siblings, 0 replies; 21+ messages in thread
From: Chen Gang @ 2015-07-19 21:52 UTC (permalink / raw)
To: Peter Maydell, Chris Metcalf, rth@twiddle.net,
Andreas Färber
Cc: walt@tilera.com, Riku Voipio, qemu-devel
On 7/19/15 19:31, Chen Gang wrote:
> On 6/13/15 21:10, Chen Gang wrote:
>> +
>> +void cpu_loop(CPUTLGState *env)
>> +{
>> + CPUState *cs = CPU(tilegx_env_get_cpu(env));
>> + int trapnr;
>> +
>> + while (1) {
>> + cpu_exec_start(cs);
>> + trapnr = cpu_tilegx_exec(env);
>> + cpu_exec_end(cs);
>> + switch (trapnr) {
>> + case TILEGX_EXCP_SYSCALL:
>> + env->regs[TILEGX_R_RE] = do_syscall(env, env->regs[TILEGX_R_NR],
>> + env->regs[0], env->regs[1],
>> + env->regs[2], env->regs[3],
>> + env->regs[4], env->regs[5],
>> + env->regs[6], env->regs[7]);
>> + env->regs[TILEGX_R_ERR] = TILEGX_IS_ERRNO(env->regs[TILEGX_R_RE])
>> + ? env->regs[TILEGX_R_RE]
>
> It needs "- env->regs[TILEGX_R_RE]" instead of "env->regs[TILEGX_R_RE]".
>
> For stat64, when the syscall returns ENOENT, qemu marks the result as
> -ENOENT, so the return path has to negate it again.
>
After this fix, tilegx linux-user lets busybox pass simple tests
(manually exercising sh, ls, cp, mv, and vi all work OK).
Next, I shall run the gcc testsuite with tilegx linux-user. :-)
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
end of thread [~2015-07-19 21:51 UTC]
Thread overview: 21+ messages
2015-06-13 13:07 [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Chen Gang
2015-06-13 13:08 ` [Qemu-devel] [PATCH 01/10 v12] linux-user: tilegx: Firstly add architecture related features Chen Gang
2015-06-13 13:10 ` [Qemu-devel] [PATCH 02/10 v12] linux-user: Support tilegx architecture in linux-user Chen Gang
2015-07-19 11:31 ` Chen Gang
2015-07-19 21:52 ` Chen Gang
2015-06-13 13:13 ` [Qemu-devel] [PATCH 03/10 v12] linux-user/syscall.c: conditionally define syscalls which are not defined in tilegx Chen Gang
2015-06-13 13:14 ` [Qemu-devel] [PATCH 04/10 v12] target-tilegx: Add opcode basic implementation from Tilera Corporation Chen Gang
2015-06-13 13:15 ` [Qemu-devel] [PATCH 05/10 v12] target-tilegx/opcode_tilegx.h: Modify it to fit QEMU usage Chen Gang
2015-06-13 13:18 ` [Qemu-devel] [PATCH 07/10 v12] target-tilegx: Add cpu basic features for linux-user Chen Gang
2015-06-13 13:18 ` [Qemu-devel] [PATCH 06/10 v12] target-tilegx: Add special register information from Tilera Corporation Chen Gang
2015-06-13 13:19 ` [Qemu-devel] [PATCH 08/10 v12] target-tilegx: Add several helpers for instructions translation Chen Gang
[not found] ` <55A76DE6.4070103@hotmail.com>
2015-07-16 8:42 ` gchen gchen
2015-06-13 13:21 ` [Qemu-devel] [PATCH 09/10 v12] target-tilegx: Generate tcg instructions to finish "Hello world" Chen Gang
[not found] ` <55A76DB1.4090302@hotmail.com>
2015-07-16 8:43 ` gchen gchen
2015-07-19 9:42 ` Chen Gang
2015-07-19 10:09 ` Chen Gang
2015-07-19 9:57 ` Chen Gang
2015-06-13 13:22 ` [Qemu-devel] [PATCH 10/10 v12] target-tilegx: Add TILE-Gx building files Chen Gang
2015-06-18 22:02 ` [Qemu-devel] [PATCH 00/10 v12] tilegx: Firstly add tilegx target for linux-user Peter Maydell
2015-06-19 1:12 ` Chen Gang
2015-07-01 1:06 ` Chen Gang